Using find_peaks in loop to find peak heights from data - python

This problem is about using scipy.signal.find_peaks to extract the mean peak height from data files efficiently. I am a beginner with Python (3.7), so I am not sure whether I have written my code optimally with regard to speed and quality.
I have a set of measurement files containing one million data points (30MB) each. The graph of this data is a signal with peaks at regular intervals, and with noise. Also, the baseline and the amplitude of the signal vary across parts of the signal. I attached an image of an example. The signal can be much less clean.
My goal is to calculate the mean height of the peaks for each file. In order to do this, first I use find_peaks to locate all the peaks. Then, I loop over each peak location and detect the peak in a small interval around the peak, to make sure I get the local height of the peak.
I then put all these heights in numpy arrays and calculate their mean and standard deviation afterwards.
Here is a barebones version of my code. It is a bit long, but I think that might also be because I am doing something wrong.
import numpy as np
from scipy.signal import find_peaks

# Allocate empty lists for values
mean_heights = []
std_heights = []
mean_baselines = []
std_baselines = []
temperatures = []

# Loop over several files, read them in and process data
# (file_list and prom are defined elsewhere in the full script)
for file in file_list:
    temperatures.append(file)
    # Load in data from a file of 30 MB
    t_dat, x_dat = np.loadtxt(file, delimiter='\t', unpack=True)
    # Find all peaks in this file
    peaks, peak_properties = find_peaks(x_dat, prominence=prom, width=0)
    # Calculate window size, make sure it is even
    if round(len(t_dat) / len(peaks)) % 2 == 0:
        n_points = len(t_dat) // len(peaks)
    else:
        n_points = len(t_dat) // len(peaks) + 1
    t_slice = t_dat[-1] / len(t_dat)
    # Allocate np arrays for storing heights
    baseline_list = np.zeros(len(peaks) - 2)
    height_list = np.zeros(len(peaks) - 2)
    # Loop over all found peaks, and re-detect the peak in a window around the
    # peak to be able to detect its local height without triggering on a
    # baseline far away
    for i in range(len(peaks) - 2):
        # Make a window around the peak
        sub_t = t_dat[peaks[i+1] - n_points // 2: peaks[i+1] + n_points // 2]
        sub_x = x_dat[peaks[i+1] - n_points // 2: peaks[i+1] + n_points // 2]
        # Detect the peaks (2 versions, specific to my application)
        h_min = max(sub_x) - np.mean(sub_x)
        _, baseline_props = find_peaks(
            sub_x, prominence=h_min, distance=n_points - 1, width=0)
        _, height_props = find_peaks(np.append(
            min(sub_x) - 1, sub_x), prominence=h_min, distance=n_points - 1, width=0)
        # Add the heights to the np arrays storing the heights
        baseline_list[i] = baseline_props["prominences"]
        height_list[i] = height_props["prominences"]
    # Fill lists with values, taking the stdev and mean of the np arrays with the heights
    mean_heights.append(np.mean(height_list))
    std_heights.append(np.std(height_list))
    mean_baselines.append(np.mean(baseline_list))
    std_baselines.append(np.std(baseline_list))
It takes ~30 s to execute. Is this normal or too slow? If so, can it be optimised?

In the meantime I have improved the speed by removing various inefficiencies that I found with the Python profiler. I will list the optimisations here, ordered by their significance for the speed:
Using pandas pd.read_csv() for I/O instead of np.loadtxt() cut off about 90% of the runtime. As also mentioned here, this saves a lot of time. This means changing this:
t_dat, x_dat = np.loadtxt(file, delimiter='\t', unpack=True)
to this:
data = pd.read_csv(file, delimiter="\t", names=["t_dat", "x_dat"])
t_dat = data.values[:, 0]
x_dat = data.values[:, 1]
Removing redundant len() calls. I noticed that len() was called many times, unnecessarily. Changing this:
if round(len(t_dat) / len(peaks)) % 2 == 0:
    n_points = int(len(t_dat) / len(peaks))
else:
    n_points = int(len(t_dat) / len(peaks) + 1)
to this:
n_points = round(len(t_dat) / len(peaks))
if n_points % 2 != 0:
    n_points += 1
also proved to be a significant improvement.
Lastly, a disproportionately large share of the computational time (about 20%) was spent in the built-in Python functions min(), max() and sum(). Since I was already using numpy arrays, switching to the numpy equivalents of these functions resulted in an 84% improvement on this part. This means, for example, changing max(sub_x) to sub_x.max().
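For instance, the window statistic in the inner loop becomes (same logic as above, numpy calls only):
# numpy method calls instead of the Python built-ins (identical result)
h_min = sub_x.max() - sub_x.mean()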
These are all unrelated optimisations that I still think might be useful for a Python beginner like myself, and they do help a lot.
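A further idea, beyond what I tried above (a sketch of my own, not part of the original code): since every window has the same length, the per-peak loop could be vectorised entirely, if a simple max-minus-mean estimate of the local height is acceptable:
import numpy as np

# Sketch only: estimate each local peak height as (window max - window mean)
# in one pass, instead of re-running find_peaks per window. Assumes every
# interior peak lies at least n_points // 2 samples from the array edges.
half = n_points // 2
interior = peaks[1:-1]
windows = np.stack([x_dat[p - half:p + half] for p in interior])
height_est = windows.max(axis=1) - windows.mean(axis=1)
print(height_est.mean(), height_est.std())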


Change the melody of human speech using FFT and polynomial interpolation

I'm trying to do the following:
Extract the melody of me asking a question (the word "Hey?" recorded to wav) so I get a melody pattern that I can apply to any other recorded/synthesized speech (basically how F0 changes in time).
Use polynomial interpolation (Lagrange?) so I get a function that describes the melody (approximately, of course).
Apply the function to another recorded voice sample (e.g. the word "Hey." so it's transformed into a question "Hey?", or transform the end of a sentence to sound like a question [e.g. "Is it ok." => "Is it ok?"]). Voila, that's it.
What have I done? Where am I?
Firstly, I dived into the math that stands behind the FFT and signal processing (the basics). I want to do it programmatically, so I decided to use Python.
I performed the FFT on the entire "Hey?" voice sample and got the data in the frequency domain (please don't mind the y-axis units, I haven't normalized them).
So far so good. Then I decided to divide my signal into chunks so I get clearer frequency information - peaks and so on - this is a blind shot, me trying to grasp the idea of manipulating the frequency and analyzing the audio data. It gets me nowhere, however, at least not in the direction I want.
Now, if I took those peaks, got an interpolated function from them, applied the function to another voice sample (a part of a voice sample, that is also FFTed of course) and performed an inverse FFT, I wouldn't get what I wanted, right?
I would only change the magnitude so it wouldn't affect the melody itself (I think so).
Then I used the spec and pyin methods from librosa to extract the real F0-in-time - the melody of asking the question "Hey?". And as we would expect, we can clearly see an increase in frequency value:
A non-question statement looks like this - let's say it's more or less constant.
The same applies to a longer speech sample:
Now, I assume that I have the blocks to build my algorithm/process, but I still don't know how to assemble them because there are some blanks in my understanding of what's going on under the hood.
I think I need to find a way to map the F0-in-time curve from the spectrogram to the "pure" FFT data, get an interpolated function from it, and then apply the function to another voice sample.
Is there any elegant (inelegant would be fine too) way to do this? I need to be pointed in the right direction, because I can feel I'm close, but I'm basically stuck.
The code that works behind the above charts is taken just from the librosa docs and other stackoverflow questions, it's just a draft/POC so please don't comment on style, if you could :)
fft in chunks:
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
import os
file = os.path.join("dir", "hej_n_nat.wav")
fs, signal = wavfile.read(file)
CHUNK = 1024
afft = np.abs(np.fft.fft(signal[0:CHUNK]))
freqs = np.linspace(0, fs, CHUNK)[0:int(fs / 2)]
spectrogram_chunk = freqs / np.amax(freqs * 1.0)
# Plot spectral analysis
plt.plot(freqs[0:250], afft[0:250])
plt.show()
spectrogram:
import librosa.display
import numpy as np
import matplotlib.pyplot as plt
import os
file = os.path.join("/path/to/dir", "hej_n_nat.wav")
y, sr = librosa.load(file, sr=44100)
f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C7'))
times = librosa.times_like(f0)
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
fig, ax = plt.subplots()
img = librosa.display.specshow(D, x_axis='time', y_axis='log', ax=ax)
ax.set(title='pYIN fundamental frequency estimation')
fig.colorbar(img, ax=ax, format="%+2.f dB")
ax.plot(times, f0, label='f0', color='cyan', linewidth=2)
ax.legend(loc='upper right')
plt.show()
Hints, questions and comments much appreciated.
The problem was that I didn't know how to modify the fundamental frequency (F0). By modifying it I mean modifying F0 and its harmonics as well.
The spectrograms in question show, for each point in time, the power (dB) of each frequency bin.
Since I know which time bin holds which frequency of the melody (the green f0 line plotted on the spectrogram), I need to compute a function that represents that green line so I can apply it to other speech samples.
So I need to use some interpolation method which takes as parameters the sample F0 function points.
One needs to remember that a polynomial passing exactly through n points has degree n - 1. The example unfortunately doesn't have that, but the effect is somewhat OK for a prototype.
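As a quick sanity check of that claim (a toy example of mine, not from the project): four points are reproduced exactly by a degree-3 fit.
import numpy as np

# 4 points -> an exact fit needs degree n - 1 = 3
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 2.0, 5.0])
coeffs = np.polynomial.polynomial.polyfit(xs, ys, deg=3)
assert np.allclose(np.polynomial.polynomial.polyval(xs, coeffs), ys)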
def _get_bin_nr(val, bins):
    the_bin_no = np.nan
    for b in range(0, bins.size - 1):
        if bins[b] <= val < bins[b + 1]:
            the_bin_no = b
        elif val > bins[bins.size - 1]:
            the_bin_no = bins.size - 1
    return the_bin_no

def calculate_pattern_poly_coeff(file_name):
    y_source, sr_source = librosa.load(os.path.join(ROOT_DIR, file_name), sr=sr)
    f0_source, voiced_flag, voiced_probs = librosa.pyin(y_source, fmin=librosa.note_to_hz('C2'),
                                                        fmax=librosa.note_to_hz('C7'), pad_mode='constant',
                                                        center=True, frame_length=4096, hop_length=512, sr=sr_source)
    all_freq_bins = librosa.core.fft_frequencies(sr=sr, n_fft=n_fft)
    f0_freq_bins = list(filter(lambda x: np.isfinite(x), map(lambda val: _get_bin_nr(val, all_freq_bins), f0_source)))
    return np.polynomial.polynomial.polyfit(np.arange(0, len(f0_freq_bins), 1), f0_freq_bins, 3)

def calculate_pattern_poly_func(coefficients):
    # polyfit returns coefficients lowest-degree first, while np.poly1d
    # expects highest-degree first, so reverse them here
    return np.poly1d(coefficients[::-1])
Method calculate_pattern_poly_coeff calculates polynomial coefficients.
Using numpy's poly1d I can compute a function which can modify the speech. How to do that?
I just need to move all values up or down vertically at a certain point in time.
For instance, I want to move all frequencies at time bin 0.75 seconds up 3 times - meaning that the frequency will be increased and the melody at that point will sound higher.
Code:
def transform(sentence_audio_sample, mode=None, show_spectrograms=False, frames_from_end_to_transform=12):
    # cutting out silence
    y_trimmed, idx = librosa.effects.trim(sentence_audio_sample, top_db=60, frame_length=256, hop_length=64)
    stft_original = librosa.stft(y_trimmed, hop_length=hop_length, pad_mode='constant', center=True)
    stft_original_roll = stft_original.copy()
    rolled = stft_original_roll.copy()
    source_frames_count = np.shape(stft_original_roll)[1]
    sentence_ending_first_frame = source_frames_count - frames_from_end_to_transform
    sentence_len = np.shape(stft_original_roll)[1]
    for i in range(sentence_ending_first_frame + 1, sentence_len):
        if mode == 'question':
            by = int(_question_pattern(i) / 500)
        elif mode == 'exclamation':
            by = int(_exclamation_pattern(i) / 500)
        else:
            by = 0
        rolled = _roll_column(rolled, i, by)
    transformed_data = librosa.istft(rolled, hop_length=hop_length, center=True)
    return transformed_data

def _roll_column(two_d_array, column, shift):
    two_d_array[:, column] = np.roll(two_d_array[:, column], shift)
    return two_d_array
In this case I am simply rolling frequencies up or down within a given time bin.
This needs to be polished, as it doesn't take into consideration the actual state of the transformed sample. It just rolls it up/down according to the factor calculated using the polynomial function computed earlier.
You can check the full code of my project on github; the "audio" package contains the pattern calculator and the audio transform algorithm described above.
Feel free to ask if something's unclear :)
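For orientation, here is how the pieces above could be glued together (a hypothetical usage sketch - the file names and the _question_pattern global are my assumptions; see the github project for the real wiring):
coeffs = calculate_pattern_poly_coeff('hey_question.wav')    # assumed file name
_question_pattern = calculate_pattern_poly_func(coeffs)      # read inside transform()
y_statement, _ = librosa.load(os.path.join(ROOT_DIR, 'is_it_ok.wav'), sr=sr)
y_question = transform(y_statement, mode='question')         # returns the istft result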

Peak detection in unevenly spaced timeseries

I'm working with a dataset containing measurements paired with a datetime, like:
datetime,value
2017-01-01 00:01:00,32.7
2017-01-01 00:03:00,37.8
2017-01-01 00:04:05,35.0
2017-01-01 00:05:37,101.1
2017-01-01 00:07:00,39.1
2017-01-01 00:09:00,38.9
I'm trying to detect and remove potential peaks that might appear, like 2017-01-01 00:05:37,101.1 measure.
Some things that I found so far:
This dataset has a time spacing that goes from 15 seconds all the way to 25 minutes, making it super uneven;
The width of the peaks cannot be determined beforehand
The height of the peaks clearly and significantly deviates from the other values
Normalization of the time step should only occur after the removal of the outliers since they would interfere with the results
It's "impossible" to making it even due to other anomalies (e.g, negative values, flat lines), even without them it would create wrong values due to the peaks;
find_peaks is expecting an evenly spaced timeseries therefore the previous solution didn't work for the irregular timeseries we have;
On that issue I forgot to mention the critical point that is unevenly spaced timeseries.
I've searched everywhere and I couldn't find anything. The implementation is going to be in Python but I'm willing to dig around other languages to get the logic.
I've posted this code on github for anyone who has this problem, or a similar one, in the future.
After a lot of trial and error I think I created something that works. Using what @user58697 told me, I managed to create code that detects every peak based on a threshold.
Using the logic they explained - if (flow[i+1] - flow[i]) / (time[i+1] - time[i]) > threshold - I've written the following code:
I started by reading the .csv and parsing the dates, then split the data into two numpy arrays:
import datetime

import numpy as np
import pandas as pd

dataset = pd.read_csv('https://raw.githubusercontent.com/MigasTigas/peak_removal/master/dataset_simple_example.csv', parse_dates=['date'])
dataset = dataset.sort_values(by=['date']).reset_index(drop=True).to_numpy()  # Sort and convert to numpy array

# Split into 2 arrays
values = [float(i[1]) for i in dataset]  # Flow values, in float
values = np.array(values)

dates = [i[0].to_pydatetime() for i in dataset]
dates = np.array(dates)
Then I applied (flow[i+1] - flow[i]) / (time[i+1] - time[i]) to the whole dataset:
flow = np.diff(values)
time = np.diff(dates).tolist()
time = np.divide(time, np.power(10, 9))
slopes = np.divide(flow, time) # (flow[i+1] - flow[i]) / (time[i+1] - time[i])
slopes = np.insert(slopes, 0, 0, axis=0) # Since we "lose" the first index, this one is 0, just for alignments
And finally to detect the peaks we reduced the data to rolling windows of x seconds each. That way we can detect them easily:
# ROLLING WINDOW
size = len(dataset)
rolling_window = []
rolling_window_indexes = []
RW = []
RWi = []
window_size = 240  # Seconds

# create the rolling windows
for line in range(size):
    limit_stamp = dates[line] + datetime.timedelta(seconds=window_size)
    for subline in range(line, size, 1):
        if dates[subline] <= limit_stamp:
            rolling_window.append(slopes[subline])   # Values of the slopes
            rolling_window_indexes.append(subline)   # Indexes of the respective values
        else:
            RW.append(rolling_window)
            if line != size:  # To prevent clearing the last rolling window
                rolling_window = []
            RWi.append(rolling_window_indexes)
            if line != size:
                rolling_window_indexes = []
            break
    else:
        # To get the last rolling window, since the inner loop ends without breaking
        RW.append(rolling_window)
        RWi.append(rolling_window_indexes)
After getting all rolling windows we start the fun:
t = 0.3  # Threshold
peaks = []

for index, rollWin in enumerate(RW):
    if rollWin[0] > t:                       # If the first value is greater than the threshold
        top = rollWin[0]                     # Sets it as a possible peak
        bottom = np.min(rollWin)             # Finds the minimum of the window
        if bottom < -t:                      # If less than the negative threshold
            bottomIndex = int(np.argmin(rollWin))  # Find its index
            for peak in range(0, bottomIndex, 1):  # Append all points from the start of the window up to bottomIndex
                peaks.append(RWi[index][peak])
The idea behind this code is that every peak has a rising and a falling edge, and if both exceed the stated threshold then it's an outlier peak, along with all points between them:
Here is the result translated to the real dataset used, posted on github:
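For comparison, the same slope-threshold idea can be sketched in vectorised form (my assumption: the data is still in the original pandas DataFrame df with 'date' and 'value' columns; this only flags single-point spikes, wider peaks still need the window scan):
import numpy as np

t = 0.3  # same threshold as above
dt = df['date'].diff().dt.total_seconds().to_numpy()[1:]   # seconds between samples
slopes = np.diff(df['value'].to_numpy()) / dt
# a steep rise immediately followed by a steep fall marks a one-point spike
spike_idx = np.flatnonzero((slopes[:-1] > t) & (slopes[1:] < -t)) + 1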

Matplotlib Magnitude_spectrum Units in Python for Comparing Guitar Strings

I'm using matplotlib's magnitude_spectrum to compare the tonal characteristics of guitar strings. magnitude_spectrum shows the y-axis as having units of "Magnitude (energy)". I use two different 'processes' to compare the FFT. Process 2 (for lack of a better description) is much easier to interpret - code & graphs below.
My questions are:
In terms of units, what does "Magnitude (energy)" mean and how does it relate to dB?
Using #Process 2 (see code & graphs below), what type of units am I looking at, dB?
If #Process 2 is not dB, then what is the best way to scale it to dB?
My code below (simplified) shows an example of what I'm talking about/looking at.
import numpy as np
from scipy.io.wavfile import read
from pylab import plot, psd, magnitude_spectrum
import matplotlib.pyplot as plt

#Hello Signal!!!
(fs, x) = read(r'C:\Desktop\Spectral Work\EB_AB_1_2.wav')

#Remove silence out of beginning of signal with threshold of 1000
def indices(a, func):
    #This allows using a lambda function as the equivalent of find() in matlab
    return [i for (i, val) in enumerate(a) if func(val)]

#Make the signal smaller so it uses less resources
x_tiny = x[0:100000]

#threshold is 1000, [1] picks the second index above the threshold
thresh = indices(x_tiny, lambda y: y > 1000)[1]

#backs signal up 20 bins, so as not to ignore the initial pluck sound...
thresh_start = thresh - 20

#starts at thresh_start, ends at end of signal (-1 is just a referencing thing)
analysis_signal = x[thresh_start - 1:]

#Split signal so it is 1 second long
one_sec = 1 * fs
onesec = x[thresh_start - 1:one_sec + thresh_start - 1]

#process 1
(spectrum, freqs, _) = magnitude_spectrum(onesec, Fs=fs)

#process 2
spectrum1 = spectrum / len(spectrum)
I don't know how to bulk process multiple .wav files, so I run this code separately on a whole bunch of different .wav files and put the results into Excel to compare. But for the sake of not looking at ugly graphs, I graphed it in Python. Here's what #process 1 and #process 2 look like when graphed:
[plots of Process 1 and Process 2]
Magnitude is just the absolute value of the frequency spectrum. As you have labelled in Process 1, "energy" is a good way to think about it.
Both Process 1 and Process 2 are in the same units. The only difference is that the values in Process 2 have been divided by the total length of the array (a scalar, hence no change of units). Normally this happens as part of the FFT, but sometimes it does not (e.g. numpy.fft doesn't include the divide by length).
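To see the effect of that missing division (a quick check of my own): a unit-amplitude sine transformed with numpy's FFT peaks at N/2 until you divide by the length.
import numpy as np

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 50 * n / N)   # unit-amplitude sine, 50 whole cycles
mag = np.abs(np.fft.rfft(x))
print(mag.max())         # ~512, i.e. N/2, before normalisation
print((mag / N).max())   # ~0.5 after dividing by the length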
The easiest way to scale it to dB is:
(spectrum, freqs, _) = magnitude_spectrum(onesec, Fs=fs, scale='dB')
If you wanted to do this yourself then you would need to do something like:
spectrum2 = 20*numpy.log10(spectrum)
It is worth noting that I'm not sure whether you should be applying the /len(spectrum) or not. I would suggest using scale='dB'!
To convert to dB, take the log of any non-zero spectrum magnitudes and scale (scale to match a calibrated mic and sound source if available, or use an arbitrary scale to make the levels look familiar otherwise) before plotting.
For zero magnitude values, perhaps just replace or clamp the log with whatever you want to be on the bottom of your log plot (certainly not negative-infinity).
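A minimal sketch of that recipe (the floor value and calibration offset are arbitrary placeholders):
import numpy as np

floor = 1e-12      # clamp zeros to an arbitrary plot floor instead of -inf
offset_db = 0.0    # replace with a calibration offset if a reference is available
spectrum_db = 20 * np.log10(np.maximum(spectrum, floor)) + offset_db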

python - combining argsort with masking to get nearest values in moving window

I have some code for calculating missing values in an image, based on neighbouring values in a 2D circular window. It also uses the values from one or more temporally-adjacent images at the same locations (i.e. the same 2D window shifted in the 3rd dimension).
For each position that is missing, I need to calculate the value based not necessarily on all the values available in the whole window, but only on the spatially-nearest n cells that do have values (in both images / Z-axis positions), where n is some value less than the total number of cells in the 2D window.
At the minute, it's much quicker to calculate for everything in the window, because my means of sorting to get the nearest n cells with data is the slowest part of the function, as it has to be repeated every time even though the distances in terms of window coordinates do not change. I'm not sure this is necessary, and I feel I must be able to get the sorted distances once and then mask them when selecting only the available cells.
Here's my code for selecting the data to use within a window of the gap cell location:
import numpy as np

# radius will in reality be ~100
radius = 2
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
dist = np.sqrt(x**2 + y**2)
circle_template = dist > radius

# this will in reality be a very large 3 dimensional array
# representing daily images with some gaps, indicated by 0s
dataStack = np.zeros((2, 5, 5))
dataStack[1] = (np.random.random(25) * 100).reshape(dist.shape)
dataStack[0] = (np.random.random(25) * 100).reshape(dist.shape)

testdata = dataStack[1]
alternatedata = dataStack[0]
random_gap_locations = (np.random.random(25) * 30).reshape(dist.shape) > testdata
testdata[random_gap_locations] = 0
testdata[radius, radius] = 0

# in reality we will go through every gap (zero) location in the data
# for each image and for each gap use slicing to get a window of
# size (radius*2+1, radius*2+1) around it from each image, with the
# gap being at the centre i.e.
# testgaplocation = [radius, radius]
# and the variables testdata, alternatedata below will refer to these
# slices

locations_to_exclude = np.logical_or(circle_template,
                                     np.logical_or(testdata == 0, alternatedata == 0))
# the places that are inside the circular mask and where both images
# have data
locations_to_include = ~locations_to_exclude
number_available = np.count_nonzero(locations_to_include)

# we only want to do the interpolation calculations from the nearest n
# locations that have data available, n will be ~100 in reality
number_required = 3

available_distances = dist[locations_to_include]
available_data = testdata[locations_to_include]
available_alternates = alternatedata[locations_to_include]

if number_available > number_required:
    # In this case we need to find the closest number_required of elements, based
    # on distances recorded in dist, from available_data and available_alternates.
    # Having to repeat this argsort for each gap cell calculation is slow and feels
    # like it should be avoidable.
    sortedDistanceIndices = available_distances.argsort(kind='mergesort', axis=None)
    requiredIndices = sortedDistanceIndices[0:number_required]
    selected_data = np.take(available_data, requiredIndices)
    selected_alternates = np.take(available_alternates, requiredIndices)
else:
    # we just use available_data and available_alternates as they are...
    pass

# now do stuff with the selected data to calculate a value for the gap cell
This works, but over half of the total time of the function is taken by the argsort of the masked spatial-distance data (~900 µs of a total 1.4 ms - and this function will be running tens of billions of times, so this is an important difference!).
I am sure that I must be able to do this argsort just once, outside of the function, when the spatial distance window is originally set up, and then include those sort indices in the masking, to get the first number_required indices without having to re-do the sort. The answer might involve putting the various bits that we are extracting from into a record array - but I can't figure out how, if so. Can anyone see how I can make this part of the process more efficient?
So you want to do the sorting outside of the loop:
sorted_dist_idcs = dist.argsort(kind='mergesort', axis=None)
Then using some variables from the original code, this is what I could come up with, though it still feels like a major round-trip..
loc_to_incl_sorted = locations_to_include.take(sorted_dist_idcs)
sorted_dist_idcs_to_incl = sorted_dist_idcs[loc_to_incl_sorted]
required_idcs = sorted_dist_idcs_to_incl[:number_required]
selected_data = testdata.take(required_idcs)
selected_alternates = alternatedata.take(required_idcs)
Note that required_idcs refers to locations in testdata and not available_data as in the original code. In this snippet I used take for the purpose of conveniently indexing the flattened array.
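For anyone unfamiliar with that behaviour, a tiny demonstration (my own example):
import numpy as np

a = np.arange(12).reshape(3, 4)
print(np.take(a, [0, 5, 11]))   # [ 0  5 11] - flat indexing across the whole array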
@moarningsun - thanks for the comment and answer. These got me on the right track, but they don't quite work for me when the gap is less than radius from the edge of the data: in this case I use a window around the gap cell which is "trimmed" to the data bounds. In this situation the indices reflect the "full" window and thus can't be used to select cells from the bounded window.
Unfortunately I edited that part of my code out when I clarified the original question, but it has turned out to be relevant.
I've realised now that if you use argsort again on the output of argsort, you get ranks; i.e. the position that each item would have when the overall array was sorted. We can safely mask these and then take the smallest number_required of them (and do this on a structured array to get the corresponding data at the same time).
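A tiny illustration of the double argsort (my own example, with a tie to show the stability):
import numpy as np

dist = np.array([3.0, 1.0, 2.0, 1.0])
order = dist.argsort(kind='mergesort')   # [1, 3, 2, 0] - stable on the tied 1.0s
ranks = order.argsort()                  # [3, 0, 2, 1] - rank each element would have when sorted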
This implies another sort within the loop, but in fact we can use partitioning rather than a full sort, because all we need is the smallest num_required items. If num_required is substantially less than the number of data items then this is much faster than doing the argsort.
For example, with num_required = 80 and num_available = 15000, the full argsort takes ~900 µs whereas argpartition followed by index and slice to get the first 80 takes ~110 µs. We still need to do the argsort to get the ranks at the outset (rather than just partitioning based on distance) in order to get the stability of the mergesort, and thus get the "right one" when distance is not unique.
My code as shown below now runs in ~610 µs on real data, including the actual calculations that aren't shown here. I'm happy with that now, but there seem to be several other apparently minor factors that can have an influence on the runtime that are hard to understand.
For example putting the circle_template in the structured array along with dist, ranks, and another field not shown here, doubles the runtime of the overall function (even if we don't access circle_template in the loop!). Even worse, using np.partition on the structured array with order=['ranks'] increases the overall function runtime by almost two orders of magnitude vs using np.argpartition as shown below!
# radius will in reality be ~100
radius = 2
y, x = np.ogrid[-radius:radius+1, -radius:radius+1]
dist = np.sqrt(x**2 + y**2)
circle_template = dist > radius
ranks = dist.argsort(axis=None, kind='mergesort').argsort().reshape(dist.shape)
diam = radius * 2 + 1

# this will in reality be a very large 3 dimensional array
# representing daily images with some gaps, indicated by 0s
dataStack = np.zeros((2, 5, 5))
dataStack[1] = (np.random.random(25) * 100).reshape(dist.shape)
dataStack[0] = (np.random.random(25) * 100).reshape(dist.shape)

# putting circle_template in this array too doubles overall function runtime!
# (dtype strings are taken from the demo arrays above so the snippet runs standalone)
fullWindowArray = np.zeros((diam, diam), dtype=[('ranks', ranks.dtype.str),
                                                ('thisdata', dataStack.dtype.str),
                                                ('alternatedata', dataStack.dtype.str),
                                                ('dist', dist.dtype.str)])
fullWindowArray['ranks'] = ranks
fullWindowArray['dist'] = dist

testdata = dataStack[1]
alternatedata = dataStack[0]
random_gap_locations = (np.random.random(25) * 30).reshape(dist.shape) > testdata
testdata[random_gap_locations] = 0
testdata[radius, radius] = 0

# in reality we will loop here to go through every gap (zero) location in the data
# for each image
gapz, gapy, gapx = 1, radius, radius
desLeft, desRight = gapx - radius, gapx + radius + 1
desTop, desBottom = gapy - radius, gapy + radius + 1
extentB, extentR = dataStack.shape[1:]

# handle the case where the gap is < search radius from the edge of
# the data. If this is the case, we can't use the full
# diam * diam window
dataL = max(0, desLeft)
maskL = 0 if desLeft >= 0 else abs(dataL - desLeft)
dataT = max(0, desTop)
maskT = 0 if desTop >= 0 else abs(dataT - desTop)
dataR = min(desRight, extentR)
maskR = diam if desRight <= extentR else diam - (desRight - extentR)
dataB = min(desBottom, extentB)
maskB = diam if desBottom <= extentB else diam - (desBottom - extentB)

# get the slice that we will be working within
# ranks and dist are already populated
boundedWindowArray = fullWindowArray[maskT:maskB, maskL:maskR]
boundedWindowArray['alternatedata'] = alternatedata[dataT:dataB, dataL:dataR]
boundedWindowArray['thisdata'] = testdata[dataT:dataB, dataL:dataR]

# circle_template is kept outside the structured array (see comment above),
# so slice it to the same bounded window here
locations_to_exclude = np.logical_or(circle_template[maskT:maskB, maskL:maskR],
                                     np.logical_or(boundedWindowArray['thisdata'] == 0,
                                                   boundedWindowArray['alternatedata'] == 0))
# the places that are inside the circular mask and where both images
# have data
locations_to_include = ~locations_to_exclude
number_available = np.count_nonzero(locations_to_include)

# we only want to do the interpolation calculations from the nearest n
# locations that have data available, n will be ~100 in reality
number_required = 3

data_to_use = boundedWindowArray[locations_to_include]

if number_available > number_required:
    # argpartition seems to be very fast when number_required is
    # substantially < data_to_use.size, but partition on the structured
    # array itself with order=['ranks'] is almost 2 orders of magnitude slower!
    reqIndices = np.argpartition(data_to_use['ranks'], number_required)[:number_required]
    data_to_use = np.take(data_to_use, reqIndices)
else:
    # we just use the data as it is...
    pass

# now do stuff with the selected data to calculate a value for the gap cell

Python - Removing specific frequencies between a range

I have a Spectrogram:
I want to clean the spectrogram up, so I only capture the frequencies within a specific range (i.e. in this example, between 2627 and 3939 Hz) and remove all of the blocks that are below this range. My overall aim is to be left with only the 4 segments that are within this frequency range and can be identified.
Here is my code so far:
import wave, struct, numpy as np, matplotlib.mlab as mlab, pylab as pl

def wavToArr(wavefile):
    w = wave.open(wavefile, "rb")
    p = w.getparams()
    s = w.readframes(p[3])
    w.close()
    sd = np.frombuffer(s, np.int16)
    return sd, p

def wavToSpec(wavefile, log=False, norm=False):
    wavArr, wavParams = wavToArr(wavefile)
    print(wavParams)
    return mlab.specgram(wavArr, NFFT=256, Fs=wavParams[2], window=mlab.window_hanning,
                         noverlap=128, sides='onesided', scale_by_freq=True)

wavArr, wavParams = wavToArr("4bats.wav")
Pxx, freqs, bins = wavToSpec("4bats.wav")
Pxx += 0.0001
freqs += (len(wavArr) / wavParams[2]) / 2.

hf = pl.figure(figsize=(12, 12))
ax = hf.add_subplot(2, 1, 1)

# plot spectrogram as decibels
hm = ax.imshow(10 * np.log10(Pxx), interpolation='nearest', origin='lower', aspect='auto')
hf.colorbar(hm)

ylcnt = len(ax.get_yticklabels())
ycnt = len(freqs)
ylstep = int(ycnt / ylcnt)
ax.set_yticklabels([int(freqs[f]) for f in range(0, ycnt, ylstep)])
pl.show()
The problem is, I don't know how to do this using Python. I know the range (2627-3939), but would I iterate through the entire 2D array and sum up all the blocks, or, for each block within the spectrogram, calculate the frequency, and if it's higher than the threshold, keep it, otherwise set the values to 0.0?
If I sum up each of the bins, I get the following:
I need to keep these blocks, but, want to remove every other block apart from these.
I hope someone can help me!
Maybe you want something like:
Pxx[np.greater(np.sum(Pxx, axis=1), 2955), :] = 0.
or, I might have your axes switched, so also try:
Pxx[:, np.greater(np.sum(Pxx, axis=0), 2955)] = 0.
Or maybe you want something else... I find the question a bit unclear.
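If the goal is specifically to keep only a known band such as 2627-3939 Hz, a sketch using the freqs array returned by specgram might look like this (assuming, as mlab.specgram documents, that freqs indexes the rows of Pxx):
band = (freqs >= 2627) & (freqs <= 3939)
Pxx[~band, :] = 0.   # zero every row whose centre frequency is outside the band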
