How to extract frequency associated with FFT values in Python

I used the fft function in numpy, which resulted in a complex array. How do I get the exact frequency values?

np.fft.fftfreq tells you the frequencies associated with the coefficients:
import numpy as np
x = np.array([1,2,1,0,1,2,1,0])
w = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x))
for coef, freq in zip(w, freqs):
    if coef:
        print('{c:>6} * exp(2 pi i t * {f})'.format(c=coef, f=freq))
# (8+0j) * exp(2 pi i t * 0.0)
# -4j * exp(2 pi i t * 0.25)
# 4j * exp(2 pi i t * -0.25)
The OP asks how to find the frequency in Hertz.
I believe the formula is frequency (Hz) = abs(fft_freq * frame_rate).
Here is some code that demonstrates that.
First, we make a wave file at 440 Hz:
import math
import wave
import struct
if __name__ == '__main__':
    # http://stackoverflow.com/questions/3637350/how-to-write-stereo-wav-files-in-python
    # http://www.sonicspot.com/guide/wavefiles.html
    freq = 440.0
    data_size = 40000
    fname = "test.wav"
    frate = 11025.0
    amp = 64000.0
    nchannels = 1
    sampwidth = 2
    framerate = int(frate)
    nframes = data_size
    comptype = "NONE"
    compname = "not compressed"
    data = [math.sin(2 * math.pi * freq * (x / frate))
            for x in range(data_size)]
    wav_file = wave.open(fname, 'w')
    wav_file.setparams(
        (nchannels, sampwidth, framerate, nframes, comptype, compname))
    for v in data:
        wav_file.writeframes(struct.pack('h', int(v * amp / 2)))
    wav_file.close()
This creates the file test.wav.
Now we read the data back in, FFT it, find the coefficient with maximum power, look up the corresponding FFT frequency, and then convert it to Hertz:
import wave
import struct
import numpy as np
if __name__ == '__main__':
    data_size = 40000
    fname = "test.wav"
    frate = 11025.0
    wav_file = wave.open(fname, 'r')
    data = wav_file.readframes(data_size)
    wav_file.close()
    data = struct.unpack('{n}h'.format(n=data_size), data)
    data = np.array(data)
    w = np.fft.fft(data)
    freqs = np.fft.fftfreq(len(w))
    print(freqs.min(), freqs.max())
    # (-0.5, 0.499975)
    # Find the peak in the coefficients
    idx = np.argmax(np.abs(w))
    freq = freqs[idx]
    freq_in_hertz = abs(freq * frate)
    print(freq_in_hertz)
    # 439.8975

Here we deal with the NumPy implementation of the FFT.
Frequencies associated with DFT values (in Python)
By fft, Fast Fourier Transform, we understand a member of a large family of algorithms that enable the fast computation of the DFT, Discrete Fourier Transform, of an equisampled signal.
A DFT converts an ordered sequence of N complex numbers to an ordered sequence of N complex numbers, with the understanding that both sequences are periodic with period N.
In many cases, you think of
a signal x, defined in the time domain, of length N, sampled at a constant interval dt,¹
its DFT X (here specifically X = np.fft.fft(x)), whose elements are sampled on the frequency axis with a sample rate dω.
Some definitions
the period (aka duration²) of the signal x, sampled at dt with N samples, is
T = dt*N
the fundamental frequencies (in Hz and in rad/s) of X, your DFT are
df = 1/T
dω = 2*pi/T # =df*2*pi
the top frequency is the Nyquist frequency
ny = dω*N/2
(NB: the Nyquist frequency is not dω*N)³
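A quick numeric illustration of these definitions (the values N = 1000 and dt = 0.001 s are just assumed for the example):
import numpy as np

N, dt = 1000, 0.001      # assumed: 1000 samples taken every 1 ms
T = dt * N               # 1.0 s  -> period of the sampled signal
df = 1 / T               # 1.0 Hz -> frequency resolution
dw = 2 * np.pi / T       # the dω above, ~6.283 rad/s
ny = dw * N / 2          # Nyquist frequency, ~3141.6 rad/s, i.e. 500 Hz
print(T, df, dw, ny)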
The frequencies associated with a particular element in the DFT
The frequencies corresponding to the elements in X = np.fft.fft(x) for a given index 0<=n<N can be computed as follows:
def rad_on_s(n, N, dω):
    return dω*n if n < N/2 else dω*(n-N)
or in a single sweep
ω = np.array([dω*n if n<N/2 else dω*(n-N) for n in range(N)])
if you prefer to consider frequencies in Hz, s/ω/f/
f = np.array([df*n if n<N/2 else df*(n-N) for n in range(N)])
Using those frequencies
If you want to modify the original signal x -> y by applying an operator in the frequency domain that is a function of frequency only, the way to go is to compute the ω's and
Y = X*f(ω)
y = ifft(Y)
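For instance, here is a minimal sketch of that pattern (the signal, the Gaussian transfer function and its width sigma are all just assumptions for the example): smooth a noisy 50 Hz sine by multiplying its spectrum with a Gaussian low-pass.
import numpy as np

N, dt = 1024, 0.001                       # assumed sample count and interval
t = np.arange(N) * dt
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(N)

T = dt * N
dw = 2 * np.pi / T
w = np.array([dw * n if n < N/2 else dw * (n - N) for n in range(N)])

sigma = 2 * np.pi * 100                   # transfer-function width in rad/s (assumed)
X = np.fft.fft(x)
Y = X * np.exp(-0.5 * (w / sigma) ** 2)   # Y = X * f(omega)
y = np.fft.ifft(Y).real                   # back to the time domain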
Introducing np.fft.fftfreq
Of course, numpy has a convenience function, np.fft.fftfreq, that returns dimensionless frequencies rather than dimensional ones, but it's as easy as
f = np.fft.fftfreq(N)*N*df
ω = np.fft.fftfreq(N)*N*dω
Because df = 1/T and T = N/sps (sps being the number of samples per second) one can also write
f = np.fft.fftfreq(N)*sps
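As a small check (N and dt assumed for the example), passing the sample spacing d=dt to fftfreq gives the same dimensional frequencies directly:
import numpy as np

N, dt = 8, 0.25
sps = 1 / dt
f1 = np.fft.fftfreq(N) * sps        # dimensionless bins scaled by hand
f2 = np.fft.fftfreq(N, d=dt)        # numpy does the scaling for you
print(np.allclose(f1, f2))          # True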
Notes
Dual to the sampling interval dt there is the sampling rate, sr, or how many samples are taken during a unit of time; of course dt=1/sr and sr=1/dt.
Speaking of a duration, even if it is rather common, hides the
fundamental idea of periodicity.
The concept of Nyquist frequency is clearly exposed in any textbook dealing with the analysis of time signals, and also in the linked Wikipedia article. Does it suffice to say that information cannot be created?

The frequency is encoded in the index of the array: at index n the frequency is 2πn/N radians per sample, where N is the array's length. Consider:
>>> numpy.fft.fft([1,2,1,0,1,2,1,0])
array([ 8.+0.j, 0.+0.j, 0.-4.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+4.j,
0.+0.j])
the result has nonzero values at indices 0, 2 and 6. There are 8 elements. This means
y(t) ≈ [ 8·exp(2πi·0·t/8) − 4i·exp(2πi·2·t/8) + 4i·exp(2πi·6·t/8) ] / 8
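A quick numerical check of that reconstruction:
import numpy as np

x = np.array([1, 2, 1, 0, 1, 2, 1, 0])
t = np.arange(8)
y = (8 * np.exp(2j * np.pi * t * 0 / 8)
     - 4j * np.exp(2j * np.pi * t * 2 / 8)
     + 4j * np.exp(2j * np.pi * t * 6 / 8)) / 8
print(np.allclose(y.real, x))       # True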

Related

Correct amplitude of the python fft (for a Skew normal distribution)

The Situation
I am currently writing a program that will later be used to analyze a signal that is roughly an asymmetric Gaussian. I am interested in how many frequencies I need to reproduce the signal reasonably exactly, and especially in the amplitudes of those frequencies.
Before I input the real data I'm testing the program with a default (asymmetric) Gaussian, as can be seen in the code below.
My Problem
To ensure that I get the amplitudes right, I am rebuilding the original signal using the whole frequency spectrum, but there are some difficulties. I can reproduce the signal reasonably well by multiplying amp by 0.16, a factor I got by looking at the ratio rebuild/original. Of course, this is really unsatisfying and can't be the correct solution.
To be precise, the difference is not dependent on the time length and seems to be a Gaussian too, following the form of the original and increasing in asymmetry according to the skewnorm function itself. The amplitude of the difference function is linearly correlated with height.
My Question
I am writing this post because I am out of ideas for getting the amplitude right. Maybe someone has had the same or a similar problem and can share their solution or give a hint.
Further information
Before focusing on an (asymmetric) Gaussian I analyzed periodic signals and rectangular pulses, which sadly were very unstable to variations in the time length of the input signal. In that context I experimented with window functions, which seemed to speed up the process and increase the stability, the reason being that I had to integrate the peaks. Working with the Gaussian I was told to take each peak from the bare FFT and drop the integration approach, hence my uncertainty about the amplitude described above. Maybe someone has an opinion on the approach I chose and, if necessary, can suggest an improvement.
Code
from numpy.fft import fft, fftfreq
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import skewnorm
np.random.seed(1234)
def data():
    height = 1
    data = height * skewnorm.pdf(t, a=0, loc=t[int(N/2)])
    # noise_power = 1E-6
    # noise = np.random.normal(scale=np.sqrt(noise_power), size=t.shape)
    # data += noise
    return data

def fft_own(data):
    freq = fftfreq(N, dt)
    data_fft = fft(data) * np.pi
    amp = 2/N * np.abs(data_fft)  # * factor (depending on t1)
    # amp = 2/T * np.abs(data_fft)**2
    phase = np.angle(data_fft)
    peaks, = np.where(amp >= 0)  # use whole spectrum for rebuild
    return freq, amp, phase, peaks

def rebuild(fft_own):
    freq, amp, phase, peaks = fft_own
    df = freq[1] - freq[0]
    data_rebuild = 0
    for i in peaks:
        amplitude = amp[i] * df
        # amplitude = amp[i] * 0.1
        # amplitude = np.sqrt(amp[i] * df)
        data_rebuild += amplitude * np.exp(0+1j * (2*np.pi * freq[i] * t
                                                   + phase[i]))
    f, ax = plt.subplots(1, 1)
    # mask = (t >= 0) & (t <= t1-1)
    ax.plot(t, data_init, label="initial signal")
    ax.plot(t, np.real(data_rebuild), label="rebuild")
    # ax.plot(t[mask], (data_init - np.real(data_rebuild))[mask], label="diff")
    ax.set_xlim(0, t1-1)
    ax.legend()

t0 = 0
t1 = 10  # diff(t0, t1) ∝ df
# T = t1 - t0
N = 4096
t = np.linspace(t0, t1, int(N))
dt = (t1 - t0) / N
data_init = data()
fft_init = fft_own(data_init)
rebuild_init = rebuild(fft_init)
You should get a perfect reconstruction if you divide amp by N, and remove all your other factors.
Currently you do:
data_fft = fft(data) * np.pi # Multiply by pi
amp = 2/N * np.abs(data_fft) # Multiply by 2/N
amplitude = amp[i] * df # Multiply by df = 1/(dt*N) = 1/10
This means you currently multiply by an extra factor of pi * 2 / 10 ≈ 0.628 that you shouldn't; only the 1/N factor in there is correct.
Correct code:
def fft_own(data):
    freq = fftfreq(N, dt)
    data_fft = fft(data)
    amp = np.abs(data_fft) / N
    phase = np.angle(data_fft)
    peaks, = np.where(amp >= 0)  # use whole spectrum for rebuild
    return freq, amp, phase, peaks

def rebuild(fft_own):
    freq, amp, phase, peaks = fft_own
    data_rebuild = 0
    for i in peaks:
        data_rebuild += amp[i] * np.exp(0+1j * (2*np.pi * freq[i] * t
                                                + phase[i]))
Your program can be significantly simplified by using ifft. Simply set to 0 those frequencies in data_fft that you don't want to include in the reconstruction, and apply ifft to it:
data_fft = fft(data)
data_fft[np.abs(data_fft) < threshold] = 0   # drop the coefficients you don't want
rebuild = ifft(data_fft).real                # ifft comes from numpy.fft, like fft
Note that the Fourier transform of a Gaussian is a Gaussian, so you won't be picking out individual peaks, you are picking a compact range of frequencies that will always include 0. This is an ideal low-pass filter.

Fourier Transform - strange results

I'm trying to make some examples of FFTs. The idea here is to have 3 sine waves for 3 different musical notes (A, C, E), add them together (to form the A minor chord) and then do an FFT to retrieve the original frequencies.
import numpy as np
import matplotlib.pyplot as plt
import scipy.fft
def generate_sine_wave(freq, sample_rate, duration):
    x = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
    frequencies = x * freq
    # 2pi because np.sin takes radians
    y = np.sin(2 * np.pi * frequencies)
    return x, y

def main():
    # Frequency of note in Aminor chord (A, C, E)
    # note_names = ('A', 'C', 'E')
    # fs = (440, 261.63, 329.63)
    fs = (27.50, 16.35, 20.60)
    # duration, in seconds.
    duration = .5
    # sample rate. determines how many data points the signal uses to represent
    # the sine wave per second. So if the signal had a sample rate of 10 Hz and
    # was a five-second sine wave, then it would have 10 * 5 = 50 data points.
    sample_rate = 1000
    fig, ax = plt.subplots(5)
    all_wavelengths = []
    # Create a linspace, with N samples from 0 to duration
    # x = np.linspace(0.0, T, N)
    for i, f in enumerate(fs):
        x, y = generate_sine_wave(f, sample_rate, duration)
        # y = np.sin(2 * np.pi * F * x)
        all_wavelengths.append(y)
        ax[i].plot(x, y)
    # sum of all notes
    aminor = np.sum(all_wavelengths, axis=0)
    ax[i].plot(x, aminor)
    yf = np.abs(scipy.fft.rfft(aminor))
    xf = scipy.fft.rfftfreq(int(sample_rate * duration), 1 / sample_rate)
    ax[i + 1].plot(xf, yf)
    ax[i + 1].vlines(fs, ymin=0, ymax=(max(yf)), color='purple')
    plt.show()

if __name__ == '__main__':
    main()
However, the FFT plot (last subplot) does not have the proper peak frequencies (highlighted through vertical purple lines). Why is that?
The FFT will only recover the contained frequencies exactly if the sampling window covers a multiple of the signal's period. Otherwise, if there is a "remainder", the frequency peaks will deviate from the exact values.
Since your A-minor signal contains three distinct frequencies, 27.50, 16.35 and 20.60 Hz, you need a sampling duration T over which every component completes a whole number of periods, i.e. T*f must be an integer for each frequency f. All three frequencies are integer multiples of 0.05 Hz, their greatest common divisor:
>>> import math
>>> math.gcd(2750, 1635, 2060)  # the frequencies in units of 0.01 Hz
5
so the combined signal repeats every 1 / 0.05 Hz = 20 seconds (550, 327 and 412 whole periods of the three components, respectively). For a duration of 20 seconds, or any multiple of it, the frequencies will be recovered exactly. The following plot is obtained for such a duration:
I think that, within the margin of error, the results do in fact match your frequencies:
You can see in your frequency plot that the bins closest to your actual frequencies do indeed have the highest amplitude.
However, because this is a DFT, the returned frequencies are discrete, so they don't exactly match the frequencies you used to construct your sample.
What you can try is making your sample size (i.e. the number of time points in your input data) longer and/or a whole multiple of your input periods. That increases the frequency resolution and/or moves the sampled output frequencies onto the input frequencies.
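A minimal sketch of that suggestion, reusing the chord from the question and a 20-second window that holds a whole number of periods of every note (as worked out above):
import numpy as np
import scipy.fft

fs = (27.50, 16.35, 20.60)
sample_rate = 1000
duration = 20                                  # seconds; any multiple of 20 works too
n = int(sample_rate * duration)
t = np.linspace(0, duration, n, endpoint=False)

aminor = sum(np.sin(2 * np.pi * f * t) for f in fs)
spectrum = np.abs(scipy.fft.rfft(aminor))
freqs = scipy.fft.rfftfreq(n, 1 / sample_rate)

# the three strongest bins now land exactly on the input frequencies
print(np.sort(freqs[np.argsort(spectrum)[-3:]]))   # [16.35 20.6  27.5 ]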

Speeding up normal distribution probability mass allocation

We have N users with P avg. points per user, where each point is a single value between 0 and 1. We need to distribute the mass of each point using a normal distribution with a known scale (standard deviation) of 0.05, since the points have some uncertainty. Additionally, we need to wrap the mass around 0 and 1 such that e.g. a point at 0.95 will also allocate mass around 0. I've provided a working example below, which bins the normal distribution into D=50 bins. The example uses the Python typing module, but you can ignore that if you'd like.
from typing import List, Any
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
D = 50
BINS: List[float] = np.linspace(0, 1, D + 1).tolist()
def probability_mass(distribution: Any, x0: float, x1: float) -> float:
    """
    Computes the area under the distribution, wrapping at 1.
    The wrapping is done by adding the PDF at +- 1.
    """
    assert x1 > x0
    return (
        (distribution.cdf(x1) - distribution.cdf(x0))
        + (distribution.cdf(x1 + 1) - distribution.cdf(x0 + 1))
        + (distribution.cdf(x1 - 1) - distribution.cdf(x0 - 1))
    )

def point_density(x: float) -> List[float]:
    distribution: Any = scipy.stats.norm(loc=x, scale=0.05)
    density: List[float] = []
    for i in range(D):
        density.append(probability_mass(distribution, BINS[i], BINS[i + 1]))
    return density

def user_density(points: List[float]) -> Any:
    # Find the density of each point
    density: Any = np.array([point_density(p) for p in points])
    # Combine points and normalize
    combined = density.sum(axis=0)
    return combined / combined.sum()

if __name__ == "__main__":
    # Example for one user
    data: List[float] = [.05, .3, .5, .5]
    density = user_density(data)
    # Example for multiple users (N = 2)
    print([user_density(x) for x in [[.3, .5], [.7, .7, .7, .9]]])

    ### NB: THE REMAINING CODE IS FOR ILLUSTRATION ONLY!
    ### NB: THE IMPORTANT THING IS TO COMPUTE THE DENSITY FAST!
    middle: List[float] = []
    for i in range(D):
        middle.append((BINS[i] + BINS[i + 1]) / 2)
    plt.bar(x=middle, height=density, width=1.0 / D + 0.001)
    plt.xlim(0, 1)
    plt.xlabel("x")
    plt.ylabel("Density")
    plt.show()
In this example N=1, D=50, P=4. However, we want to scale this approach to N=10000 and P=100 while being as fast as possible. It's unclear to me how we'd vectorize it. How do we best speed this up?
EDIT
The faster solution can have slightly different results. For instance, it could approximate the normal distribution instead of using the precise normal distribution.
EDIT2
We only care about computing density using the user_density() function. The plot is only to help explain the approach. We do not care about the plot itself :)
EDIT3
Note that P is the avg. points per user. Some users may have more and some may have less. If it helps, you can assume that we can throw away points such that all users have a max of 2 * P points. It's fine to ignore this part while benchmarking as long as the solution can handle a flexible # of points per user.
You could get below 50 ms for the largest case (N=10000, AVG[P]=100, D=50) by using FFT and creating the data in a numpy-friendly format; otherwise it will be closer to 300 ms.
The idea is to convolve a single normal distribution centered at 0 with a series of Dirac deltas.
See image below:
Using circular convolution solves two issues.
naturally deals with wrapping at the edges
can be efficiently computed with FFT and Convolution Theorem
First one must create a distribution to be copied. The function mk_bell() creates a histogram of a normal distribution with stddev 0.05 centered at 0.
The distribution wraps around at 1; one could use an arbitrary distribution here. The spectrum of the distribution is computed once and reused for the fast convolution.
Next a comb-like function is created. The peaks are placed at indices corresponding to peaks in user density. E.g.
peak_location = [0.1, 0.3, 0.7]
D = 10
maps to
peak_index = (D * np.array(peak_location)).astype(int)  # [1, 3, 7]
dist = [0, 1, 0, 1, 0, 0, 0, 1, 0, 0]  # D entries, with ones at indices 1, 3, 7
You can quickly create a composition of Dirac deltas by computing the bin index of each peak location and feeding them to the np.bincount() function.
To speed things up even more, the comb functions of all users can be computed at once.
The array dist is a 2D array of shape N x D. It can be linearized into a 1D array of shape (N*D); after this change the element at position [user_id, peak_index] is accessible at index user_id*D + peak_index.
With the numpy-friendly input format (described below) this operation is easily vectorized.
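A toy version of that flattening trick (the numbers are made up for illustration):
import numpy as np

D = 10
uids = np.array([0, 0, 1])               # two points for user 0, one for user 1
peaks = np.array([0.1, 0.3, 0.5])
mass = np.array([0.5, 0.5, 1.0])

# flatten (user, bin) into a single index user*D + bin_index,
# then one bincount call builds the whole N x D comb array
pidx = (D * (peaks + uids)).astype(int)  # [1, 3, 15]
dist = np.bincount(pidx, weights=mass, minlength=2 * D).reshape(2, D)
print(dist)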
The convolution theorem says that the spectrum of the convolution of two signals equals the product of the spectra of the individual signals.
The spectrum is computed with numpy.fft.rfft, a variant of the Fast Fourier Transform dedicated to real-valued signals (no imaginary part).
Numpy can compute the FFT of every row of a larger matrix with one call.
Next, the spectrum of the convolution is computed by simple multiplication, using broadcasting.
Finally, the result is transformed back to the "time" domain by the inverse Fourier transform, numpy.fft.irfft.
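A tiny check of that circular-convolution-via-FFT step on made-up data:
import numpy as np

D = 8
comb = np.zeros(D); comb[[1, 3, 7]] = 1.0         # a few Dirac deltas
bell = np.exp(-0.5 * (np.arange(D) / 2.0) ** 2)   # some kernel centered at bin 0

# direct circular convolution ...
direct = np.array([sum(comb[j] * bell[(i - j) % D] for j in range(D))
                   for i in range(D)])
# ... equals multiplication of the spectra followed by the inverse transform
via_fft = np.fft.irfft(np.fft.rfft(comb) * np.fft.rfft(bell), n=D)
print(np.allclose(direct, via_fft))               # True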
To use the full speed of numpy one should avoid variable-size data structures and stick to fixed-size arrays. I propose to represent the input data as three arrays:
uids, the identifier for the user, an integer 0..N-1
peaks, the location of the peak
mass, the mass of the peak, currently 1/number-of-peaks-for-the-user
This representation of the data allows quick vectorized processing.
Eg:
user_data = [[0.1, 0.3], [0.5]]
maps to:
uids = [0, 0, 1] # 2 points for user_data[0], one from user_data[1]
peaks = [0.1, 0.3, 0.5] # serialized user_data
mass = [0.5, 0.5, 1] # scaling factors for each peak, 0.5 means 2 peaks for user 0
The code:
import numpy as np
import matplotlib.pyplot as plt
import time
def mk_bell(D, SIGMA):
    # computes a normal distribution wrapped around 1 and centered at zero
    x = np.linspace(0, 1, D, endpoint=False)
    x = (x + 0.5) % 1 - 0.5
    bell = np.exp(-0.5*np.square(x / SIGMA))
    return bell / bell.sum()

def user_densities_by_fft(uids, peaks, mass, D, N=None):
    bell = mk_bell(D, 0.05).astype('f4')
    sbell = np.fft.rfft(bell)
    if N is None:
        N = uids.max() + 1
    # ensure that peaks are in the [0..1) interval
    peaks = peaks - np.floor(peaks)
    # convert peak locations from 0-1 to bin indices, offset by user id
    pidx = (D * (peaks + uids)).astype('i4')
    dist = np.bincount(pidx, mass, N * D).reshape(N, D)
    # process all users at once with the convolution theorem
    sdist = np.fft.rfft(dist)
    sdist *= sbell
    res = np.fft.irfft(sdist)
    return res

def generate_data(N, Pmean):
    # generator for large data
    data = []
    for n in range(N):
        # select P uniformly from 1..2*Pmean
        P = np.random.randint(2 * Pmean) + 1
        # select peak locations
        chunk = np.random.uniform(size=P)
        data.append(chunk.tolist())
    return data

def make_data_numpy_friendly(data):
    uids = []
    chunks = []
    mass = []
    for uid, peaks in enumerate(data):
        uids.append(np.full(len(peaks), uid))
        mass.append(np.full(len(peaks), 1 / len(peaks)))
        chunks.append(peaks)
    return np.hstack(uids), np.hstack(chunks), np.hstack(mass)

D = 50

# demo for a simple multi-user distribution
data, N = [[0, .5], [.7, .7, .7, .9], [0.05, 0.3, 0.5, 0.5]], None
uids, peaks, mass = make_data_numpy_friendly(data)
dist = user_densities_by_fft(uids, peaks, mass, D, N)
plt.plot(dist.T)
plt.show()

# the actual measurement
N = 10000
P = 100
data = generate_data(N, P)

tic = time.time()
uids, peaks, mass = make_data_numpy_friendly(data)
toc = time.time()
print(f"make_data_numpy_friendly: {toc - tic}")

tic = time.time()
dist = user_densities_by_fft(uids, peaks, mass, D, N)
toc = time.time()
print(f"user_densities_by_fft: {toc - tic}")
The results on my 4-core Haswell machine are:
make_data_numpy_friendly: 0.2733159065246582
user_densities_by_fft: 0.04064297676086426
It took 40 ms to compute the distributions. Notice that converting the data to the numpy-friendly format takes about 6 times longer than the actual computation.
Python is really slow when it comes to looping, so I strongly recommend generating the input data directly in the numpy-friendly format in the first place.
There are some issues that could still be addressed:
precision can be improved by using a larger D and downsampling
the accuracy of the peak locations could be further improved by widening the spikes
performance: scipy.fft offers more variants of the FFT implementation that may be faster
This would be my vectorized approach:
import numpy as np
import matplotlib.pyplot as plt

D = 50
BINS = np.linspace(0, 1, D + 1)   # the bin edges from the question

data = np.array([0.05, 0.3, 0.5, 0.5])
np.random.seed(31415)
# random noise
randoms = np.random.normal(0, 1, (len(data), int(1e5))) * 0.05
# samples with noise
samples = data[:, None] + randoms
# wrap to [0, 1]
samples = (samples % 1).ravel()
# histogram
hist, bins, patches = plt.hist(samples, bins=BINS, density=True)
Output:
I was able to reduce the time from about 4 seconds per sample of 100 datapoints to about 1 ms per sample.
It looks to me like you're spending quite a lot of time simulating a very large number of normal distributions. Since you're dealing with a very large sample size anyway, you may as well just use standard normal distribution values, because it'll all just average out anyway.
I recreated your approach (the BaseMethod class), then created an optimized version (the OptimizedMethod class), and evaluated both using a timeit decorator. The primary difference in my approach is the following line:
# Generate a standardized set of values to add to each sample to simulate normal distribution
self.norm_vals = np.array([norm.ppf(x / norm_val_n) * 0.05 for x in range(1, norm_val_n, 1)])
This creates a generic set of datapoints based on an inverse normal cumulative distribution function that we can add to each datapoint to simulate a normal distribution around that point. Then we just reshape the data into user samples and run np.histogram on the samples.
import numpy as np
import scipy.stats
from scipy.stats import norm
import time
# timeit decorator for evaluating performance
def timeit(method):
    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()
        print('%r %2.2f ms' % (method.__name__, (te - ts) * 1000))
        return result
    return timed

# Define Variables
N = 10000
D = 50
P = 100

# Generate sample data
np.random.seed(0)
data = np.random.rand(N, P)

# Run OP's method for comparison
class BaseMethod:
    def __init__(self, d=50):
        self.d = d
        self.bins = np.linspace(0, 1, d + 1).tolist()

    def probability_mass(self, distribution, x0, x1):
        """
        Computes the area under the distribution, wrapping at 1.
        The wrapping is done by adding the PDF at +- 1.
        """
        assert x1 > x0
        return (
            (distribution.cdf(x1) - distribution.cdf(x0))
            + (distribution.cdf(x1 + 1) - distribution.cdf(x0 + 1))
            + (distribution.cdf(x1 - 1) - distribution.cdf(x0 - 1))
        )

    def point_density(self, x):
        distribution = scipy.stats.norm(loc=x, scale=0.05)
        density = []
        for i in range(self.d):
            density.append(self.probability_mass(distribution, self.bins[i], self.bins[i + 1]))
        return density

    @timeit
    def base_user_density(self, data):
        n = data.shape[0]
        density = np.empty((n, self.d))
        for i in range(data.shape[0]):
            # Find the density of each point
            row_density = np.array([self.point_density(p) for p in data[i]])
            # Combine points and normalize
            combined = row_density.sum(axis=0)
            density[i, :] = combined / combined.sum()
        return density

base = BaseMethod(d=D)
# Only running the base method on the first 2 rows of data because it's slow
density = base.base_user_density(data[:2])
print(density[:2, :5])

class OptimizedMethod:
    def __init__(self, d=50, norm_val_n=50):
        self.d = d
        self.norm_val_n = norm_val_n
        self.bins = np.linspace(0, 1, d + 1).tolist()
        # Generate a standardized set of values to add to each sample to simulate a normal distribution
        self.norm_vals = np.array([norm.ppf(x / norm_val_n) * 0.05 for x in range(1, norm_val_n, 1)])

    @timeit
    def optimized_user_density(self, data):
        samples = np.empty((data.shape[0], data.shape[1], self.norm_val_n - 1))
        # transform datapoints to normal distributions around each datapoint
        for i in range(self.norm_vals.shape[0]):
            samples[:, :, i] = data + self.norm_vals[i]
        samples = samples.reshape(samples.shape[0], -1)
        # wrap around [0, 1]
        samples = samples % 1
        # loop over samples for density
        density = np.empty((data.shape[0], self.d))
        for i in range(samples.shape[0]):
            hist, bins = np.histogram(samples[i], bins=self.bins)
            density[i, :] = hist / hist.sum()
        return density

om = OptimizedMethod()
# Run the optimized method on the first 2 rows for an apples-to-apples comparison
density = om.optimized_user_density(data[:2])
# Run the optimized method on the full data
density = om.optimized_user_density(data)
print(density[:2, :5])
Running on my system, the original method took about 8.4 seconds to run on 2 rows of data, while the optimized method took 1 millisecond to run on 2 rows of data and completed 10,000 rows in 4.7 seconds. I printed the first five values of the first 2 samples for each method.
'base_user_density' 8415.03 ms
[[0.02176227 0.02278653 0.02422535 0.02597123 0.02745976]
[0.0175103 0.01638513 0.01524853 0.01432158 0.01391156]]
'optimized_user_density' 1.09 ms
'optimized_user_density' 4755.49 ms
[[0.02142857 0.02244898 0.02530612 0.02612245 0.0277551 ]
[0.01673469 0.01653061 0.01510204 0.01428571 0.01326531]]

Lowpass filter with a time-varying cutoff frequency, with Python

How do I apply a lowpass filter whose cutoff frequency varies linearly (or along a more general curve than linear) over time, e.g. from 10000 Hz down to 200 Hz, with numpy/scipy and possibly no other library?
Example:
at 00:00,000, lowpass cutoff = 10000hz
at 00:05,000, lowpass cutoff = 5000hz
at 00:09,000, lowpass cutoff = 1000hz
then cutoff stays at 1000hz during 10 seconds, then cutoff decreases down to 200hz
Here is how to do a simple 100hz lowpass:
from scipy.io import wavfile
import numpy as np
from scipy.signal import butter, lfilter
sr, x = wavfile.read('test.wav')
b, a = butter(2, 100.0 / (sr / 2), btype='low')  # Butterworth; Wn is normalized to the Nyquist frequency
y = lfilter(b, a, x)
wavfile.write('out.wav', sr, np.asarray(y, dtype=np.int16))
but how to make the cutoff vary?
Note: I've already read Applying time-variant filter in Python but the answer is quite complex (and it applies to many kinds of filter in general).
One comparatively easy method is to keep the filter fixed and modulate signal time instead. For example, if signal time runs 10x faster, a 10 kHz lowpass will act like a 1 kHz lowpass in standard time.
To do this we need to solve a simple ODE
dy/dt = 1/f(y)
Here t is the modulated time, y the real time, and f the desired cutoff at time y.
Prototype implementation:
from __future__ import division
import numpy as np
from scipy import integrate, interpolate
from scipy.signal import butter, lfilter, spectrogram
slack_l, slack = 0.1, 1
cutoff = 50
L = 25
from scipy.io import wavfile
sr, x = wavfile.read('capriccio.wav')
x = x[:int((L + slack) * sr), 0]
# sr = 44100
# x = np.random.normal(size=((L + slack) * sr,))
b, a = butter(2, 2 * cutoff / sr, btype='low') # Butterworth
# cutoff function
def f(t):
    return (10000 - 1000 * np.clip(t, 0, 9) - 1000 * np.clip(t-19, 0, 0.8)) \
        / cutoff

# and its reciprocal
def fr(_, t):
    return cutoff / (10000 - 1000 * t.clip(0, 9) - 1000 * (t-19).clip(0, 0.8))
# modulate time
# calculate upper end of td first
tdmax = integrate.quad(f, 0, L + slack_l, points=[9, 19, 19.8])[0]
span = (0, tdmax)
t = np.arange(x.size) / sr
tdinfo = integrate.solve_ivp(fr, span, np.zeros((1,)),
                             t_eval=np.arange(0, span[-1], 1 / sr),
                             vectorized=True)
td = tdinfo.y.ravel()
# modulate signal
xd = interpolate.interp1d(t, x)(td)
# and linearly filter
yd = lfilter(b, a, xd)
# modulate signal back to linear time
y = interpolate.interp1d(td, yd)(t[:-sr*slack])
# check
import pylab
xa, ya, z = spectrogram(y, sr)
pylab.pcolor(ya, xa, z, vmax=2**8, cmap='nipy_spectral')
pylab.savefig('tst.png')
wavfile.write('capriccio_vandalized.wav', sr, y.astype(np.int16))
Sample output:
Spectrogram of first 25 seconds of BWV 826 Capriccio filtered with a time varying lowpass implemented via time bending.
You can use scipy.fftpack.fft and scipy.fftpack.fftfreq to set thresholds in the frequency domain:
fft = scipy.fftpack.fft(sound)
freqs = scipy.fftpack.fftfreq(sound.size, time_step)
here time_step is the sampling interval of the sound, i.e. 1 / sample rate
fft[(freqs < 200)] = 0
this would set every frequency component below 200 Hz to zero.
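For the lowpass asked about in the question you want the opposite comparison, zeroing the bins above the cutoff and then transforming back. A minimal sketch of that round trip (noise is used as a stand-in for the sound):
import numpy as np
import scipy.fftpack

sr = 44100
time_step = 1.0 / sr                           # the sampling interval
sound = np.random.randn(sr)                    # one second of noise as a stand-in

spectrum = scipy.fftpack.fft(sound)
freqs = scipy.fftpack.fftfreq(sound.size, time_step)
spectrum[np.abs(freqs) > 200] = 0              # brick-wall lowpass at 200 Hz
filtered = scipy.fftpack.ifft(spectrum).real   # back to the time domain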
For the time-varying cutoff, I'd split the sound into pieces and filter each one. Assuming the sound has a sampling rate of 44100, the 5000 Hz filter would start at sample 220500 (five seconds in):
sound_10k = sound[:220500]
freqs_10k = scipy.fftpack.fftfreq(sound_10k.size, time_step)
fft_10k = scipy.fftpack.fft(sound_10k)
fft_10k[np.abs(freqs_10k) > 10000] = 0
then for the next segment:
sound_5k = sound[220500:396900]
freqs_5k = scipy.fftpack.fftfreq(sound_5k.size, time_step)
fft_5k = scipy.fftpack.fft(sound_5k)
fft_5k[np.abs(freqs_5k) > 5000] = 0
and so on.
Edit: to make it a "sliding" or gradual filter instead of piecewise, you could make the pieces much smaller and apply a gradually changing frequency threshold to each consecutive piece (5000 -> 5001 -> 5002); a rough sketch of that is below.
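A rough sketch of that sliding idea (all names and block sizes are made up; real code would also need overlapping blocks or crossfades to avoid clicks at the boundaries):
import numpy as np
import scipy.fftpack

sr = 44100
sound = np.random.randn(20 * sr)                   # 20 s of noise as a stand-in
block = 4096
n_blocks = len(sound) // block
cutoffs = np.linspace(10000, 200, n_blocks)        # one cutoff per block

out = np.zeros_like(sound)
for i, cutoff in enumerate(cutoffs):
    seg = sound[i * block:(i + 1) * block]
    spec = scipy.fftpack.fft(seg)
    freqs = scipy.fftpack.fftfreq(seg.size, 1.0 / sr)
    spec[np.abs(freqs) > cutoff] = 0               # brick-wall lowpass for this block
    out[i * block:(i + 1) * block] = scipy.fftpack.ifft(spec).real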

Rolling Window, Dominant Frequency Numpy Accelerometer Data

Want to calculate dominant frequency, secondary dominant frequency from X,Y,Z accelerometer data stored in flat CSV files (million rows+) e.g.
data
I'm trying to use scipy, although I'm aware of numpy; either would do. I've converted my X, Y, Z to SMV format (single magnitude vector) and want to apply the Fourier transform to this, and then get the frequencies using fftfreq. The bit that defeats me is the n and timestep. I have my sample rates, the hertz, and the size of the rolling window I want to look at (10 rows of data), but I'm not quite sure how to apply this to the script below:
#The three-dimension data collected (X,Y,Z) were transformed into a
#single-dimensional Signal Magnitude Vector SMV (aka The Resultant)
#SMV = x2 + Y2 + Z2
X2 = X['X']*X['X']
Y2 = X['Y']*X['Y']
Z2 = X['Z']*X['Z']
#print X['X'].head(2) #Confirmed worked
#print X2.head(2) #Confirmed worked
combine = [X2,Y2,Z2, Y]
parent = pd.concat(combine, axis=1)
parent['ADD'] = parent.sum(axis=1) #Sum X2,Y2,Z2
sqr = np.sqrt(parent['ADD']) #Square Root of Sum Above
sqr.name = 'SMV'
combine2 = [sqr, Y] #Reduce Dataset to SMV and Class
parent2 = pd.concat(combine2, axis=1)
print(parent2.head(4))
"************************* Begin Fourier ****************************"
from scipy import fftpack
X = fftpack.fft(sqr)
f_s = 80 #80 Hertz
samp = 1024 #samples per segment divided by 12.8 secs signal length
n = X.size
timestep = 10
freqs = fftpack.fftfreq(n, d=timestep)
First you need to load your data into a numpy array (sorry, I didn't quite follow your approach):
import csv
import datetime
import numpy
from scipy import fftpack

def load_data():
    csvlist = []
    times = []
    with open('freq.csv') as f:
        csvfile = csv.reader(f, delimiter=',')
        for i, row in enumerate(csvfile):
            timestamp = datetime.datetime.strptime(row[0], "%Y-%m-%d %H:%M:%S.%f")
            times.append(timestamp)
            csvlist.append(row[1:])
    timestep = (times[1] - times[0]).total_seconds()
    csvarr = numpy.array(csvlist, dtype=numpy.float32)
    return timestep, csvarr
There may well be a better way to do this.
Then you need to calculate the magnitudes:
rms = numpy.sqrt(numpy.sum(data**2, axis=1))
And then the fourier analysis:
def fourier(timestep, data):
    N = len(data)//2
    freq = fftpack.fftfreq(len(data), d=timestep)[:N]
    fft = fftpack.fft(data)[:N]
    amp = numpy.abs(fft)/N
    order = numpy.argsort(amp)[::-1]
    return freq[order]
The return value is the array of frequencies sorted in decreasing order of amplitude, so the first entry is the dominant frequency and the second the secondary dominant one.
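A hypothetical usage sketch tying this together with the load_data() and fourier() helpers above and the 10-row rolling window from the question (the window handling and names are assumptions):
timestep, data = load_data()
smv = numpy.sqrt(numpy.sum(data**2, axis=1))   # single magnitude vector

window = 10
for start in range(0, len(smv) - window + 1, window):
    freqs = fourier(timestep, smv[start:start + window])
    # freqs[0] is often the 0 Hz (mean/gravity) component; skip it if unwanted
    dominant, secondary = freqs[0], freqs[1]
    print(start, dominant, secondary)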
