Estimate formants using LPC in Python

I'm new to signal processing (and numpy, scipy, and matlab for that matter). I'm trying to estimate vowel formants with LPC in Python by adapting this matlab code:
http://www.mathworks.com/help/signal/ug/formant-estimation-with-lpc-coefficients.html
Here is my code so far:
#!/usr/bin/env python
import sys
import numpy
import wave
import math
from scipy.signal import lfilter, hamming
from scikits.talkbox import lpc

"""
Estimate formants using LPC.
"""

def get_formants(file_path):
    # Read from file.
    spf = wave.open(file_path, 'r')  # http://www.linguistics.ucla.edu/people/hayes/103/Charts/VChart/ae.wav

    # Get file as numpy array.
    x = spf.readframes(-1)
    x = numpy.fromstring(x, 'Int16')

    # Get Hamming window.
    N = len(x)
    w = numpy.hamming(N)

    # Apply window and high pass filter.
    x1 = x * w
    x1 = lfilter([1., -0.63], 1, x1)

    # Get LPC.
    A, e, k = lpc(x1, 8)

    # Get roots.
    rts = numpy.roots(A)
    rts = [r for r in rts if numpy.imag(r) >= 0]

    # Get angles.
    angz = numpy.arctan2(numpy.imag(rts), numpy.real(rts))

    # Get frequencies.
    Fs = spf.getframerate()
    frqs = sorted(angz * (Fs / (2 * math.pi)))

    return frqs

print get_formants(sys.argv[1])
Using this file as input, my script returns this list:
[682.18960189917243, 1886.3054773107765, 3518.8326108511073, 6524.8112723782951]
I didn't even get to the last steps where they filter the frequencies by bandwidth because the frequencies in the list aren't right. According to Praat, I should get something like this (this is the formant listing for the middle of the vowel):
Time_s F1_Hz F2_Hz F3_Hz F4_Hz
0.164969 731.914588 1737.980346 2115.510104 3191.775838
What am I doing wrong?
Thanks very much
UPDATE:
I changed this
x1 = lfilter([1., -0.63], 1, x1)
to
x1 = lfilter([1], [1., 0.63], x1)
as per Warren Weckesser's suggestion and am now getting
[631.44354635609318, 1815.8629524985781, 3421.8288991389031, 6667.5030877036006]
I feel like I'm missing something, since F3 is still way off.
UPDATE 2:
I realized that the order being passed to scikits.talkbox.lpc was off due to a difference in sampling frequency. Changed it to:
Fs = spf.getframerate()
ncoeff = 2 + Fs / 1000
A, e, k = lpc(x1, ncoeff)
Now I'm getting:
[257.86573127888488, 774.59006835496086, 1769.4624576002402, 2386.7093679399809, 3282.387975973973, 4413.0428174593926, 6060.8150432549655, 6503.3090645887842, 7266.5069407315023]
Much closer to Praat's estimation!

The problem had to do with the order being passed to the lpc function: 2 + fs / 1000, where fs is the sampling frequency, is the rule of thumb according to:
http://www.phon.ucl.ac.uk/courses/spsci/matlab/lect10.html
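For reference, here is a consolidated sketch of the corrected function with both fixes applied, plus the bandwidth criterion from the linked MATLAB example (keep candidates with frequency above 90 Hz and bandwidth below 400 Hz). It assumes scikits.talkbox is installed and follows the Python 2-era code from the question, so treat it as a starting point rather than a drop-in solution:

import math
import wave

import numpy
from scipy.signal import lfilter
from scikits.talkbox import lpc  # assumed available, as in the question

def get_formants(file_path):
    # Read the samples and the sampling rate from the wave file.
    spf = wave.open(file_path, 'r')
    x = numpy.fromstring(spf.readframes(-1), 'Int16')
    Fs = spf.getframerate()

    # Window the signal and apply the pre-emphasis filter b=[1], a=[1, 0.63].
    x1 = x * numpy.hamming(len(x))
    x1 = lfilter([1.], [1., 0.63], x1)

    # LPC order: rule of thumb 2 + Fs/1000.
    ncoeff = 2 + int(Fs / 1000)
    A, e, k = lpc(x1, ncoeff)

    # Roots in the upper half-plane, their angles, and the corresponding frequencies.
    rts = numpy.array([r for r in numpy.roots(A) if numpy.imag(r) >= 0])
    angz = numpy.arctan2(numpy.imag(rts), numpy.real(rts))
    frqs = angz * (Fs / (2 * math.pi))

    # Bandwidths from the root magnitudes, as in the MATLAB example.
    bws = -0.5 * (Fs / (2 * math.pi)) * numpy.log(numpy.abs(rts))

    # Keep plausible formants: frequency > 90 Hz and bandwidth < 400 Hz.
    return sorted(f for f, bw in zip(frqs, bws) if f > 90 and bw < 400)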

I have not been able to get the results you expect, but I do notice two things which might cause some differences:
Your code uses [1, -0.63] where the MATLAB code from the link you provided has [1 0.63].
Your processing is being applied to the entire x vector at once instead of smaller segments of it (see where the MATLAB code does this: x = mtlb(I0:Iend); ).
Hope that helps.

There are at least two problems:
According to the link, the "pre-emphasis filter is a highpass all-pole (AR(1)) filter". The signs of the coefficients given there are correct: [1, 0.63]. If you use [1, -0.63], you get a lowpass filter.
You have the first two arguments to scipy.signal.lfilter reversed.
So, try changing this:
x1 = lfilter([1., -0.63], 1, x1)
to this:
x1 = lfilter([1.], [1., 0.63], x1)
I haven't tried running your code yet, so I don't know if those are the only problems.

Related

How can I get the start and end indices of a note in a volume graph?

I am trying to make a program that tells me when a note has been pressed.
I have the following notes exported as a .wav file (the C major scale played 4 times with different rhythms, dynamics, and in different octaves):
I can get the volumes of my sound file using the following code:
from scipy.io import wavfile

def get_volume(file):
    sr, data = wavfile.read(file)
    if data.ndim > 1:
        data = data[:, 0]
    return data

volumes = get_volume("FILE")
Here is some information about the output:
Max: 27851
Min: -25664
Mean: -0.7569383391943734
A Sample from the array: [ -7987 -8615 -8983 -9107 -9019 -8750 -8324 -7752 -7033 -6156
-5115 -3920 -2610 -1245 106 1377 2520 3515 4364 5077
5659 6113 6441 6639 6708 6662 6518 6288 5962 5525
4963 4265 3420 2418 1264 -27 -1429 -2901 -4388 -5814
-7101 -8186 -9028 -9614 -9955 -10077 -10012 -9785 -9401 -8846]
And here is what I get when I plot the volumes array (x is the index, y is the volume):
I want to get the indices of the start and end of the notes, like the ones in the image (marked by hand, so not accurate):
When I looked at the data I realized that it is a 1D array, and I also noticed that when a note gets louder or quieter the curve is not smooth. It zigzags, but there is still a trend, so I can't just take the gradient (slope) at each point. So I thought about grouping the samples into batches, computing the average gradient per batch, and doing the calculations with that, like so:
def get_average_gradient(arr):
    # Calculates average gradient
    return sum([i - (sum(arr) / len(arr)) for i in arr]) / len(arr)

def get_note_start_end(arr_size, batch_size, arr):
    # Finds start and end indices
    ranges = []
    curr_range = [0]
    prev_slope = curr_slope = "NO SLOPE"
    has_ended = False
    for i, j in enumerate(arr):
        if j > 0:
            curr_slope = "INCREASING"
        elif j < 0:
            curr_slope = "DECREASING"
        else:
            curr_slope = "NO SLOPE"
        if prev_slope == "DECREASING" and not has_ended:
            if i == len(arr) - 1 or arr[i + 1] < 0:
                if curr_slope != "DECREASING":
                    curr_range.append((i + 1) * batch_size + batch_size)
                    ranges.append(curr_range)
                    curr_range = [(i + 1) * batch_size + batch_size + 1]
                    has_ended = True
        if has_ended and curr_slope == "INCREASING":
            has_ended = False
        prev_slope = curr_slope
    ranges[-1][-1] = arr_size - 1
    return ranges

def get_notes(batch_size, arr):
    # Gets the gradients of the batches
    out = []
    for i in range(0, len(arr), batch_size):
        if i + batch_size > len(arr):
            gradient = get_average_gradient(arr[i:])
        else:
            gradient = get_average_gradient(arr[i: i + batch_size])
        # print(gradient, i)
        out.append(gradient)
    return get_note_start_end(len(arr), batch_size, out)

notes = get_notes(128, volumes)
The problem with this is that if the batch size is too small, it returns the indices of small peaks which aren't notes on their own, and if the batch size is too big, the program misses the start and end indices.
I also tried to find the notes by using silence detection.
Here is the code I used:
from pydub import AudioSegment, silence
audio = intro = AudioSegment.from_wav("C - Major - Test.wav")
dBFS = audio.dBFS
notes = silence.detect_nonsilent(audio, min_silence_len=50, silence_thresh=dBFS-10)
This worked the best, but it still wasn't good enough. Here is what I got:
It found some notes pretty well, but it wasn't able to identify notes accurately if a note didn't become very quiet before a different one was played (like in the second and fourth scales).
I have been thinking about this problem for days and have tried most, if not all, of the good(?) ideas I had. I am new to analysing audio files, so maybe I am using the wrong data for what I want to do. Maybe I need to use the frequency data instead (I tried getting it, but couldn't make sense of it).
Frequency code:
from scipy.fft import *
from scipy.io import wavfile
import matplotlib.pyplot as plt

def get_freq(file, start_time, end_time):
    sr, data = wavfile.read(file)
    if data.ndim > 1:
        data = data[:, 0]

    # Fourier Transform
    N = len(data)
    yf = rfft(data)
    xf = rfftfreq(N, 1 / sr)
    return xf, yf

FILE = "C - Major - Test.wav"
plt.plot(*get_freq(FILE, 0, 10))
plt.show()
And the frequency graph:
And here is the .wav file:
https://drive.google.com/file/d/1CERH-eovu20uhGoV1_O3B2Ph-4-uXpiP/view?usp=sharing
Any help is appreciated :)
I think this is what you need:
First you convert negative numbers into positive ones and smooth the line to eliminate noise; to find the lower peaks you work with the negated signal.
from scipy.io import wavfile
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
import numpy as np
from scipy.signal import savgol_filter

def get_volume(file):
    sr, data = wavfile.read(file)
    if data.ndim > 1:
        data = data[:, 0]
    return data

v1 = abs(get_volume("test.wav"))

# Smooth the curve
volumes = savgol_filter(v1, 10000, 3)
lv = volumes * -1

# find peaks
peaks, _ = find_peaks(volumes, distance=8000, prominence=300)
lpeaks, _ = find_peaks(lv, distance=8000, prominence=300)

# plot them
plt.plot(volumes)
plt.plot(peaks, volumes[peaks], "x")
plt.plot(lpeaks, volumes[lpeaks], "o")
plt.plot(np.zeros_like(volumes), "--", color="gray")
plt.show()
Plot with your test file; x marks the high peaks and o marks the lower peaks.
This article presents two python libraries (Aubio, librosa) to achieve what you need and includes examples of how to use them: How to Use Python to Detect Music Onsets by Lynn Zheng
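If you want to try the onset-detection route from that article, a minimal librosa sketch (assuming the same wav file name as in the question) looks roughly like this:

import librosa

# librosa loads the file as mono floats, resampled to 22050 Hz by default.
y, sr = librosa.load("C - Major - Test.wav")

# Detected note onsets as frame indices, then converted to seconds.
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
onset_times = librosa.frames_to_time(onset_frames, sr=sr)
print(onset_times)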

Use Python lmfit with a variable number of parameters in function

I am trying to deconvolve complex gas chromatogram signals into individual gaussian signals. Here is an example, where the dotted line represents the signal I am trying to deconvolve.
I was able to write the code to do this using scipy.optimize.curve_fit; however, once applied to real data the results were unreliable. I believe being able to set bounds to my parameters will improve my results, so I am attempting to use lmfit, which allows this. I am having a problem getting lmfit to work with a variable number of parameters. The signals I am working with may have an arbitrary number of underlying gaussian components, so the number of parameters I need will vary. I found some hints here, but still can't figure it out...
Creating a python lmfit Model with arbitrary number of parameters
Here is the code I am currently working with. The code will run, but the parameter estimates do not change when the model is fit. Does anyone know how I can get my model to work?
import numpy as np
from collections import OrderedDict
from scipy.stats import norm
from lmfit import Parameters, Model

def add_peaks(x_range, *pars):
    y = np.zeros(len(x_range))
    for i in np.arange(0, len(pars), 3):
        curve = norm.pdf(x_range, pars[i], pars[i+1]) * pars[i+2]
        y = y + curve
    return y

# generate some fake data
x_range = np.linspace(0, 100, 1000)
peaks = [50., 40., 60.]
a = norm.pdf(x_range, peaks[0], 5) * 2
b = norm.pdf(x_range, peaks[1], 1) * 0.1
c = norm.pdf(x_range, peaks[2], 1) * 0.1
fake = a + b + c

param_dict = OrderedDict()
for i in range(0, len(peaks)):
    param_dict['pk' + str(i)] = peaks[i]
    param_dict['wid' + str(i)] = 1.
    param_dict['mult' + str(i)] = 1.

# In case you'd like to see the plot of the fake data
#y = add_peaks(x_range, *param_dict.values())
#plt.plot(x_range, y)
#plt.show()

# Initialize the model and fit
pmodel = Model(add_peaks)
params = pmodel.make_params()
for i in param_dict.keys():
    params.add(i, value=param_dict[i])

result = pmodel.fit(fake, params=params, x_range=x_range)
print(result.fit_report())
I think you would be better off using lmfit's ability to build a composite model.
That is, with a single peak defined with
from scipy.stats import norm

def peak(x, amp, center, sigma):
    return amp * norm.pdf(x, center, sigma)
(see also lmfit.models.GaussianModel), you can build a model with many peaks:
npeaks = 3
model = Model(peak, prefix='p1_')
for i in range(1, npeaks):
    model = model + Model(peak, prefix='p%d_' % (i+1))

params = model.make_params()
Now model will be a sum of 3 Gaussian functions, and the params created for that model will have names like p1_amp, p1_center, p2_amp, ..., to which you can add sensible initial values and/or bounds and/or constraints.
Given your example data, you could pass in initial values to make_params like
params = model.make_params(p1_amp=2.0, p1_center=50., p1_sigma=2,
                           p2_amp=0.2, p2_center=40., p2_sigma=2,
                           p3_amp=0.2, p3_center=60., p3_sigma=2)
result = model.fit(fake, params, x=x_range)
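If needed, bounds can also be set on the generated parameters before calling fit; the limits below are just illustrative:

params['p2_center'].set(min=35, max=45)     # keep the second peak's center near 40
for i in range(npeaks):
    params['p%d_amp' % (i + 1)].set(min=0)  # force non-negative amplitudes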
I was able to find a solution here:
https://lmfit.github.io/lmfit-py/builtin_models.html#example-3-fitting-multiple-peaks-and-using-prefixes
Building on the code above, the following accomplishes what I was trying to do...
from lmfit.models import GaussianModel

gauss1 = GaussianModel(prefix='g1_')
gauss2 = GaussianModel(prefix='g2_')
gauss3 = GaussianModel(prefix='g3_')
gauss4 = GaussianModel(prefix='g4_')
gauss5 = GaussianModel(prefix='g5_')

gauss = [gauss1, gauss2, gauss3, gauss4, gauss5]
prefixes = ['g1_', 'g2_', 'g3_', 'g4_', 'g5_']

mod = np.sum(gauss[0:len(peaks)])
pars = mod.make_params()

for i, prefix in zip(range(0, len(peaks)), prefixes[0:len(peaks)]):
    pars[prefix + 'center'].set(peaks[i])

init = mod.eval(pars, x=x_range)
out = mod.fit(fake, pars, x=x_range)

print(out.fit_report(min_correl=0.5))
out.plot_fit()
plt.show()

matplotlib argrelmax doesn't find all maxes

I have a project where I'm sampling analog data and attempting to analyze with matplotlib. Currently, my analog data source is a potentiometer hooked up to a microcontroller, but that's not really relevant to the issue. Here's my code
arrayFront = RunningMean(array(dataFront), 15)
arrayRear = RunningMean(array(dataRear), 15)
x = linspace(0, len(arrayFront), len(arrayFront)) # Generate x axis
y = linspace(0, len(arrayRear), len(arrayRear)) # Generate x axis
min_vals_front = scipy.signal.argrelmin(arrayFront, order=2)[0] # Min
min_vals_rear = scipy.signal.argrelmin(arrayRear, order=2)[0] # Min
max_vals_front = scipy.signal.argrelmax(arrayFront, order=2)[0] # Max
max_vals_rear = scipy.signal.argrelmax(arrayRear, order=2)[0] # Max
maxvalfront = max(arrayFront[max_vals_front])
maxvalrear = max(arrayRear[max_vals_rear])
minvalfront = min(arrayFront[min_vals_front])
minvalrear = min(arrayRear[min_vals_rear])
plot(x, arrayFront, label="Front Pressures")
plot(y, arrayRear, label="Rear Pressures")
plot(x[min_vals_front], arrayFront[min_vals_front], "x")
plot(x[max_vals_front], arrayFront[max_vals_front], "o")
plot(y[min_vals_rear], arrayRear[min_vals_rear], "x")
plot(y[max_vals_rear], arrayRear[max_vals_rear], "o")
xlim(-25, len(arrayFront) + 25)
ylim(-1000, 7000)
legend(loc='upper left')
show()
dataFront and dataRear are python lists that hold the sampled data from 2 potentiometers. RunningMean is a function that calls:
convolve(x, ones((N,)) / N, mode='valid')
The problem is that the argrelmax (and argrelmin) functions don't always find all the maxes and mins. Sometimes they don't find ANY maxes or mins, and that causes me problems in this block of code
maxvalfront = max(arrayFront[max_vals_front])
maxvalrear = max(arrayRear[max_vals_rear])
minvalfront = min(arrayFront[min_vals_front])
minvalrear = min(arrayRear[min_vals_rear])
because the [min_vals_(blank)] variables are empty. Does anyone have any idea what is happening here, and what I can do to fix the problem? Thanks in advance.
Here's one of the graphs of the data where not all the maxes and mins are found:
signal.argrelmin is a thin wrapper around signal.argrelextrema with comparator=np.less. np.less(a, b) returns the truth value of a < b element-wise. Notice that np.less requires a to be strictly less than b for it to be True.
Your data has the same minimum value at a lot of neighboring locations. At the local minima, the inequality between the local minimum and its neighbors does not satisfy a strictly-less-than relationship; it only satisfies a less-than-or-equal-to relationship.
Therefore, to find these extrema use signal.argrelmin with comparator=np.less_equal. For example, using a snippet from your data:
import numpy as np
from scipy import signal
arrayRear = np.array([-624.59309896, -624.59309896, -624.59309896,
                      -625., -625., -625.])
print(signal.argrelmin(arrayRear, order=2)[0])
# []
print(signal.argrelextrema(arrayRear, np.less_equal)[0])
# [0 1 3 4 5]
print(signal.argrelextrema(arrayRear, np.less_equal, order=2)[0])
# [0 3 4 5]
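On reasonably recent SciPy versions, scipy.signal.find_peaks is another option: it treats a flat run as a single peak (reporting the middle sample of the plateau) and can find minima if you negate the signal, though it never reports the first or last sample. A quick sketch on the full arrays from the question:

from scipy.signal import find_peaks

max_idx, _ = find_peaks(arrayFront)    # local maxima, plateau middles included
min_idx, _ = find_peaks(-arrayFront)   # local minima via the negated signal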

How to generate a fractal graph of a market in python

I wish to generate this in python:
http://classes.yale.edu/fractals/RandFrac/Market/TradingTime/Example1/Example1.html
but I'm incredibly stuck and new to this concept. Does anybody know of a library or gist for this?
Edit:
From what I can understand, you need to split the fractal in 2 every time. So you have to calculate the y-axis point on the line between the two middle points. Then the two sections need to be formed according to the fractal?
Not 100% sure what you are asking, but as I understood from your comments, you want to generate a realistically looking stock market curve using the recursion described in the link.
As far as I understood the description in the linked page and some of the parent pages, it works like this:
You are given a start and an end point and a number of turning points in the form (t1, v1), (t2, v2), etc., for example start=(0,0), end=(1,1), turns = [(1/4, 1/2), (3/4, 1/4)], where the t_i and v_i are fractions between 0 and 1.
You determine the actual turning points scaled to the interval between start and end and calculate the differences between those points, i.e. how far to go from each point p_i to reach p_(i+1).
You shuffle those segments to introduce some randomness; when put together, they still cover exactly the same distance, i.e. they connect the original start and end point.
Repeat by recursively calling the function for the different segments between the new points.
Here's some Python code I just put together:
from __future__ import division
from random import shuffle

def make_graph(depth, graph, start, end, turns):
    # add points to graph
    graph.add(start)
    graph.add(end)

    if depth > 0:
        # unpack input values
        fromtime, fromvalue = start
        totime, tovalue = end

        # calculate differences between points
        diffs = []
        last_time, last_val = fromtime, fromvalue
        for t, v in turns:
            new_time = fromtime + (totime - fromtime) * t
            new_val = fromvalue + (tovalue - fromvalue) * v
            diffs.append((new_time - last_time, new_val - last_val))
            last_time, last_val = new_time, new_val

        # add 'brownian motion' by reordering the segments
        shuffle(diffs)

        # calculate actual intermediate points and recurse
        last = start
        for segment in diffs:
            p = last[0] + segment[0], last[1] + segment[1]
            make_graph(depth - 1, graph, last, p, turns)
            last = p
        make_graph(depth - 1, graph, last, end, turns)

from matplotlib import pyplot
depth = 8
graph = set()
make_graph(depth, graph, (0, 0), (1, 1), [(1/9, 2/3), (5/9, 1/3)])
pyplot.plot(*zip(*sorted(graph)))
pyplot.show()
And here some example output:
I had a similar interest and developed a python3 library to do just what you want.
pip install fractalmarkets
See https://github.com/hyperstripe50/fractal-market-analysis/blob/master/README.md
Using @tobias_k's solution and pandas, we can translate and scale the normalized fractal to a time-based one.
import arrow
import pandas as pd
import time

depth = 5

# the "geometry" of fractal
turns = [
    (1 / 9, 0.60),
    (5 / 9, 0.30),
    (8 / 9, 0.70),
]

# select start / end time
t0 = arrow.now().floor("hours")
t1 = t0.shift(days=5)
start = (pd.to_datetime(t0._datetime), 1000)
end = (pd.to_datetime(t1._datetime), 2000)

# create a non-dimensionalized [0,0]x[1,1] Fractal
_start, _end = (0, 0), (1, 1)
graph = set()
make_graph(depth, graph, _start, _end, turns)
# just check graph length
assert len(graph) == (len(turns) + 1) ** depth + 1

# create a pandas dataframe from the normalized Fractal
df = pd.DataFrame(graph)
df.sort_values(0, inplace=True)
df.reset_index(drop=True, inplace=True)

# translate to real coordinates
X = pd.DataFrame(
    data=[(start[0].timestamp(), start[1]), (end[0].timestamp(), end[1])]
).T
delta = X[1] - X[0]
Y = df.mul(delta) + X[0]
Y[0] = [*map(lambda x: pd.to_datetime(x, unit="s"), Y[0])]

# now resample and interpolate data according to *grid* size
grid = "min"
Z = Y.set_index(0)
A = Z.resample(grid).mean().interpolate()

# plot both graphs to check errors
import matplotlib.pyplot as plt
ax = Z.plot()
A.plot(ax=ax)
plt.show()
showing both graphs:
and zooming to see interpolation and snap-to-grid differences:

IDL's INT_TABULATE - SciPy equivalent?

I am working on moving some code from IDL into python. One IDL call is to INT_TABULATE which performs integration on a fixed range.
The INT_TABULATED function integrates a tabulated set of data { xi , fi } on the closed interval [MIN(x) , MAX(x)], using a five-point Newton-Cotes integration formula.
Result = INT_TABULATED( X, F [, /DOUBLE] [, /SORT] )
Where result is the area under the curve.
IDL DOCS
My question is: does NumPy/SciPy offer a similar form of integration? I see that scipy.integrate.newton_cotes exists, but it appears to return "weights and error coefficient for Newton-Cotes integration" rather than the area.
Scipy does not provide such a high order integrator for tabulated data by default. The closest you have available without coding it yourself is scipy.integrate.simps, which uses a 3 point Newton-Cotes method.
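For example, a minimal sketch, assuming x and f are the tabulated NumPy arrays:

from scipy.integrate import simps

area = simps(f, x)  # composite Simpson's rule over the tabulated samples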
If you simply want to get comparable integration precision, you could split your x and f arrays into 5-point chunks and integrate them one at a time, using the weights returned by scipy.integrate.newton_cotes, doing something along the lines of:
import numpy as np
import scipy.integrate

def idl_tabulate(x, f, p=5):
    def newton_cotes(x, f):
        if x.shape[0] < 2:
            return 0
        rn = (x.shape[0] - 1) * (x - x[0]) / (x[-1] - x[0])
        weights = scipy.integrate.newton_cotes(rn)[0]
        return (x[-1] - x[0]) / (x.shape[0] - 1) * np.dot(weights, f)

    ret = 0
    for idx in range(0, x.shape[0], p - 1):
        ret += newton_cotes(x[idx:idx + p], f[idx:idx + p])
    return ret
This does 5-point Newton-Cotes on all intervals, except perhaps the last, where it will do a Newton-Cotes of however many points remain. Unfortunately, this will not give you the same results as INT_TABULATED, because the internal methods are different:
SciPy calculates the weights for unequally spaced points using what seems like a least-squares fit. I don't fully understand what is going on, but the code is pure Python; you can find it in your SciPy installation in the file scipy/integrate/quadrature.py.
INT_TABULATED always performs 5-point Newton-Cotes on equispaced data. If the data are not equispaced, it builds an equispaced grid, using a cubic spline to interpolate the values at those points. You can check the code here.
For the example in the INT_TABULATED docstring, which is supposed to return 1.6271 using the original code and has an exact solution of 1.6405, the above function returns:
>>> x = np.array([0.0, 0.12, 0.22, 0.32, 0.36, 0.40, 0.44, 0.54, 0.64,
... 0.70, 0.80])
>>> f = np.array([0.200000, 1.30973, 1.30524, 1.74339, 2.07490, 2.45600,
... 2.84299, 3.50730, 3.18194, 2.36302, 0.231964])
>>> idl_tabulate(x, f)
1.641998154242472
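If you want to get closer to what INT_TABULATED itself does, a rough sketch (my own approximation of the description above, not the IDL code) is to resample onto an equispaced grid whose segment count is a multiple of 4 using a cubic spline, then apply the closed 5-point Newton-Cotes (Boole's) rule on each 4-interval panel:

import numpy as np
from scipy.interpolate import CubicSpline

def int_tabulated_like(x, f):
    # Equispaced grid with the segment count rounded up to a multiple of 4.
    n = x.shape[0] - 1
    n += (4 - n % 4) % 4
    xs = np.linspace(x[0], x[-1], n + 1)
    fs = CubicSpline(x, f)(xs)
    h = (x[-1] - x[0]) / n
    # Boole's rule weights for one 5-point panel: (2h/45) * [7, 32, 12, 32, 7].
    w = 2.0 * h / 45.0 * np.array([7.0, 32.0, 12.0, 32.0, 7.0])
    return sum(np.dot(w, fs[i:i + 5]) for i in range(0, n, 4))

This still won't reproduce IDL's value digit for digit, since the spline boundary conditions and grid construction differ in detail, but it follows the same overall scheme.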
