I've been using psd() to compute the power spectral density of a .wav file. I showed it to my supervisor, and he doesn't want the segments averaged to compute Pxx, which is what the documentation says happens:
"The |FFT(i)|^2 of each segment are averaged to compute Pxx."
He suggested I use psd() but do the overlapping manually, one frame at a time, instead of passing in the whole array of data. My attempt looks like this:
def spec_draw(imag_array):
    overlap_step = len(imag_array) / 128
    temp = []
    values = []
    for x in range(0, len(imag_array), overlap_step - overlap_step / 2):
        try:
            for i in range(0, overlap_step):
                temp.append(imag_array[x + i])
        except:
            pass
        values.append(psd(temp, sides='onesided'))
        temp = []
    print values
Here imag_array is an array of data from a wave file. I've sent it to him, but he doesn't understand Python very well, and since he can't run it he can't debug it. Does this look correct?
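For reference, here is one way the manual 50%-overlap loop might be written (a sketch, not the original code; it assumes matplotlib.mlab.psd, which returns (Pxx, freqs) without plotting):

import numpy as np
from matplotlib import mlab

def spec_draw(samples, n_segments=128):
    seg_len = len(samples) // n_segments   # samples per segment
    step = seg_len // 2                    # advance half a segment: 50% overlap
    values = []
    for start in range(0, len(samples) - seg_len + 1, step):
        segment = samples[start:start + seg_len]
        # one segment per call, so psd does no averaging of its own
        Pxx, freqs = mlab.psd(segment, NFFT=seg_len, sides='onesided')
        values.append(Pxx)
    return values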
Hey all, I have a set of seemingly random 2D data that I want to reorder. This is more for an image with specific values at each pixel, but the concept will be the same.
I have a large 2D array that looks very random, say:

import numpy as np

x = 100
y = 120
np.random.random((x, y))
and I want to redistribute the 2D matrix so that the maximum value is in the center, with the surrounding values giving a sort of Gaussian fall-off from the center.
Small example:
output = [[0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0],
          [0.0, 1.0, 1.0, 1.5, 1.0, 0.5, 0.0],
          [0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5],
          [0.0, 1.0, 1.0, 1.5, 1.0, 0.5, 0.0],
          [0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]]
I know it won't really be a Gaussian; I'm just trying to give a visualization of what I would like. I was thinking of sorting the 2D array into a list from max to min and then using that to create a new 2D array, but I'm not sure how to distribute the values to fill the matrix the way I want.
Thank you very much!
If anyone looks at this in the future and needs help, here is some advice on how to do this effectively for a lot of data. The code is posted below.
import itertools
import numpy as np

def datasort(inputarray, spot_in_x, spot_in_y):
    # get the data read
    center_of_y = spot_in_y
    center_of_x = spot_in_x
    M = len(inputarray[0])
    N = len(inputarray)
    l_list = list(itertools.chain(*inputarray))  # flattened data
    l_sorted = sorted(l_list, reverse=True)      # sorted flattened data

    # reorder
    to_reorder = list(np.arange(0, len(l_sorted), 1))
    x = np.linspace(-1, 1, M)
    y = np.linspace(-1, 1, N)
    centerx = int(M / 2 - center_of_x) * 0.01
    centery = int(N / 2 - center_of_y) * 0.01
    [X, Y] = np.meshgrid(x, y)
    R = np.sqrt((X + centerx) ** 2 + (Y + centery) ** 2)  # distance from the (offset) center
    R_list = list(itertools.chain(*R))

    # pair each flat index with its distance and sort by distance
    values = zip(R_list, to_reorder)
    sortedvalues = sorted(values)
    unzip = list(zip(*sortedvalues))
    unzip2 = unzip[1]

    # hand the largest values to the indices closest to the center
    l_reorder = zip(unzip2, l_sorted)
    l_reorder = sorted(l_reorder)
    l_unzip = list(zip(*l_reorder))
    l_unzip2 = l_unzip[1]
    sorted_list = np.reshape(l_unzip2, (N, M))
    return sorted_list
This code basically takes your data and reorders it into a sorted list, then zips it together with a list based on a circular distribution. Using the zip and sort commands, you can then create the distribution of data you wish to have, based on your distribution function; in my case it's a circle that can be offset.
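A hypothetical usage example (the array shape and spot coordinates here are made up):

import numpy as np

data = np.random.random((100, 120))
peaked = datasort(data, spot_in_x=60, spot_in_y=50)

# the largest input value should now sit near row 50, column 60
print(np.unravel_index(np.argmax(peaked), peaked.shape))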
I'm trying to do the following:
Extract the melody of me asking a question (the word "Hey?" recorded to wav) so I get a melody pattern that I can apply to any other recorded/synthesized speech (basically, how F0 changes in time).
Use polynomial interpolation (Lagrange?) so I get a function that describes the melody (approximately of course).
Apply the function to another recorded voice sample. (eg. word "Hey." so it's transformed to a question "Hey?", or transform the end of a sentence to sound like a question [eg. "Is it ok." => "Is it ok?"]). Voila, that's it.
What have I done? Where am I?
First, I dived into the math behind the FFT and the basics of signal processing. I want to do it programmatically, so I decided to use Python.
I performed the FFT on the entire "Hey?" voice sample and got data in the frequency domain (please don't mind the y-axis units, I haven't normalized them).
So far so good. Then I decided to divide my signal into chunks so I get clearer frequency information: peaks and so on. This was a blind shot at grasping the idea of manipulating the frequency data and analyzing the audio. It gets me nowhere, however; not in the direction I want, at least.
Now, if I took those peaks, got an interpolated function from them, applied the function to another voice sample (a part of a voice sample that is also FFTed, of course) and performed an inverse FFT, I wouldn't get what I wanted, right?
I would only be changing the magnitudes, so it wouldn't affect the melody itself (I think).
Then I used the spec and pyin methods from librosa to extract the real F0-in-time: the melody of asking the question "Hey?". And as we would expect, we can clearly see an increase in frequency:
A non-question statement looks like this; let's say it's more or less constant.
The same applies to a longer speech sample:
Now, I assume that I have the blocks to build my algorithm/process, but I still don't know how to assemble them because there are some blanks in my understanding of what's going on under the hood.
I think I need to find a way to map the F0-in-time curve from the spectrogram to the "pure" FFT data, get an interpolated function from it, and then apply that function to another voice sample.
Is there any elegant (inelegant would be OK too) way to do this? I need to be pointed in the right direction, because I can feel I'm close but I'm basically stuck.
The code behind the above charts is taken straight from the librosa docs and other Stack Overflow questions; it's just a draft/POC, so please don't comment on style, if you could :)
FFT in chunks:
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
import os

file = os.path.join("dir", "hej_n_nat.wav")
fs, signal = wavfile.read(file)

CHUNK = 1024

# magnitude spectrum of the first chunk of the signal
afft = np.abs(np.fft.fft(signal[0:CHUNK]))
freqs = np.linspace(0, fs, CHUNK)[0:int(fs / 2)]
spectrogram_chunk = freqs / np.amax(freqs * 1.0)  # unused in this draft

# plot spectral analysis
plt.plot(freqs[0:250], afft[0:250])
plt.show()
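As a side note, the frequency axis for an N-point FFT is usually built with np.fft.rfftfreq; a variant of the plot above could look like this (a sketch, not part of the original draft):

# frequencies of the CHUNK//2 + 1 bins of a CHUNK-point real FFT
rafft = np.abs(np.fft.rfft(signal[0:CHUNK]))
rfreqs = np.fft.rfftfreq(CHUNK, d=1.0 / fs)
plt.plot(rfreqs[0:250], rafft[0:250])
plt.show()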
spectrogram:
import librosa.display
import numpy as np
import matplotlib.pyplot as plt
import os
file = os.path.join("/path/to/dir", "hej_n_nat.wav")
y, sr = librosa.load(file, sr=44100)
f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C7'))
times = librosa.times_like(f0)
D = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
fig, ax = plt.subplots()
img = librosa.display.specshow(D, x_axis='time', y_axis='log', ax=ax)
ax.set(title='pYIN fundamental frequency estimation')
fig.colorbar(img, ax=ax, format="%+2.f dB")
ax.plot(times, f0, label='f0', color='cyan', linewidth=2)
ax.legend(loc='upper right')
plt.show()
Hints, questions and comments much appreciated.
The problem was that I didn't know how to modify the fundamental frequency (F0). By modifying it I mean modifying F0 and its harmonics as well.
The spectrograms in question show, for each point in time, the power (dB) at each frequency.
Since I know which time bin holds which frequency of the melody (the green line below), I need to compute a function that represents that green line so I can apply it to other speech samples.
So I need to use some interpolation method that takes the sampled F0 points as parameters.
One needs to remember that for an exact fit, the degree of the polynomial should be one less than the number of points. The example unfortunately doesn't do that, but the effect is good enough for a prototype.
def _get_bin_nr(val, bins):
    # map a frequency value to the index of the FFT bin it falls into
    the_bin_no = np.nan
    for b in range(0, bins.size - 1):
        if bins[b] <= val < bins[b + 1]:
            the_bin_no = b
        elif val > bins[bins.size - 1]:
            the_bin_no = bins.size - 1
    return the_bin_no
def calculate_pattern_poly_coeff(file_name):
    # ROOT_DIR, sr and n_fft are globals defined elsewhere in the project
    y_source, sr_source = librosa.load(os.path.join(ROOT_DIR, file_name), sr=sr)
    f0_source, voiced_flag, voiced_probs = librosa.pyin(y_source, fmin=librosa.note_to_hz('C2'),
                                                        fmax=librosa.note_to_hz('C7'), pad_mode='constant',
                                                        center=True, frame_length=4096, hop_length=512, sr=sr_source)
    all_freq_bins = librosa.core.fft_frequencies(sr=sr, n_fft=n_fft)
    f0_freq_bins = list(filter(lambda x: np.isfinite(x), map(lambda val: _get_bin_nr(val, all_freq_bins), f0_source)))
    return np.polynomial.polynomial.polyfit(np.arange(0, len(f0_freq_bins), 1), f0_freq_bins, 3)
def calculate_pattern_poly_func(coefficients):
    # polynomial.polyfit returns coefficients lowest-degree first, while
    # poly1d expects highest-degree first, hence the reversal
    return np.poly1d(coefficients[::-1])
The method calculate_pattern_poly_coeff calculates the polynomial coefficients.
Using NumPy's poly1d I can compute a function with which I can modify the speech. How to do that?
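For instance, the pattern function might be built and sampled like this (the file name is an assumption; _question_pattern is the name used in the transform code below):

# fit the melody pattern of a recorded question and turn it into a callable
coeff = calculate_pattern_poly_coeff('hey_question.wav')
_question_pattern = calculate_pattern_poly_func(coeff)
print(_question_pattern(10))   # predicted F0 bin for frame 10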
I just need to move all values up or down vertically at a certain point in time.
For instance, I want to move all frequencies at the time bin at 0.75 seconds up 3 times -> this means the frequencies are increased and the melody at that point will sound higher.
Code:
def transform(sentence_audio_sample, mode=None, show_spectrograms=False, frames_from_end_to_transform=12):
    # cut out silence
    y_trimmed, idx = librosa.effects.trim(sentence_audio_sample, top_db=60, frame_length=256, hop_length=64)

    stft_original = librosa.stft(y_trimmed, hop_length=hop_length, pad_mode='constant', center=True)
    stft_original_roll = stft_original.copy()
    rolled = stft_original_roll.copy()

    source_frames_count = np.shape(stft_original_roll)[1]
    sentence_ending_first_frame = source_frames_count - frames_from_end_to_transform
    sentence_len = np.shape(stft_original_roll)[1]

    # roll only the last frames_from_end_to_transform frames
    for i in range(sentence_ending_first_frame + 1, sentence_len):
        if mode == 'question':
            by = int(_question_pattern(i) / 500)
        elif mode == 'exclamation':
            by = int(_exclamation_pattern(i) / 500)
        else:
            by = 0
        rolled = _roll_column(rolled, i, by)

    transformed_data = librosa.istft(rolled, hop_length=hop_length, center=True)
    return transformed_data
def _roll_column(two_d_array, column, shift):
    two_d_array[:, column] = np.roll(two_d_array[:, column], shift)
    return two_d_array
In this case I am simply rolling frequencies up or down within a given time bin.
This needs to be polished, as it doesn't take the actual state of the transformed sample into consideration; it just rolls it up/down according to the factor calculated using the polynomial function computed earlier.
You can check the full code of my project on GitHub; the "audio" package contains the pattern calculator and the audio transform algorithm described above.
Feel free to ask if something's unclear :)
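For illustration, a hypothetical way to call the transform (the file name and the sr/hop_length globals are assumptions; it relies on transform returning the istft result, as in the version above):

import soundfile as sf

# turn a flat statement into a question and save the result
y, _ = librosa.load(os.path.join(ROOT_DIR, 'hey_flat.wav'), sr=sr)
y_question = transform(y, mode='question')
sf.write('hey_question.wav', y_question, sr)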
I'm a musician, and I'm making a script that takes a wave file and snaps each of the frequencies from its Fourier transform to the nearest musical harmonic. Thanks to help from another question I posted here, that part works. What I need to do now is make it tune one frequency at a time over the course of the sound, so that at the beginning the output sounds the same as the input, and by the end it sounds like an instrument.
If I just wanted them to fade into each other, that would be easy: I could crossfade in Audacity, or take a weighted average of the original Fourier transform and the output one. What I want instead is to tune one frequency at a time. This means that 50% of the way through the output, 50% of the frequencies have been snapped to the nearest harmonic and the other 50% are untouched. How could I accomplish this without computing every single sample of the output individually?
Also, I was thinking of reducing MAX_HARMONIC over time as well, but it would have a similar problem.
This is the sample I'm testing with (rename it to missile.wav):
https://my.mixtape.moe/iltlos.wav
Here is the script so far:
import struct
import wave
import numpy as np
# import data from wave
wav_file = wave.open("missile.wav", 'r')
num_samples = wav_file.getnframes()
sampling_rate = wav_file.getframerate() / 2
data = wav_file.readframes(num_samples)
wav_file.close()
data = struct.unpack('{n}h'.format(n=num_samples), data)
data = np.array(data)
# fast fourier transform makes an array of the frequencies of sine waves that comprise the sound
data_fft = np.fft.rfft(data)
# the higher MAX_HARMONIC is, the more it sounds like the original,
# the lower it is, the more it sounds like an instrument
MAX_HARMONIC = 2
# generate list of ratios that can be used for tuning (not octave reduced)
valid_ratios = []
for i in range(1, MAX_HARMONIC + 1):
    for j in range(1, MAX_HARMONIC + 1):
        if i % 2 != 0 and j % 2 != 0:
            valid_ratios.append(i / float(j))
            valid_ratios.append(j / float(i))
# remove dupes
valid_ratios = list(set(valid_ratios))
# find all the frequencies with the valid ratios
valid_frequencies = []
multiple = 2
while multiple < num_samples / 2:
    multiple *= 2
    for ratio in valid_ratios:
        frequency = ratio * multiple
        if frequency < num_samples / 2:
            valid_frequencies.append(frequency)
# remove dupes and sort and turn into a numpy array
valid_frequencies = np.sort(np.array(list(set(valid_frequencies))))
# bin the data_fft into the nearest valid frequency
valid_frequencies = valid_frequencies.astype(np.int64)
boundaries = np.concatenate([[0], np.round(np.sqrt(0.25 + valid_frequencies[:-1] * valid_frequencies[1:])).astype(np.int64)])
select = np.abs(data_fft) > 1
filtered_data_fft = np.zeros_like(data_fft)
filtered_data_fft[valid_frequencies] = np.add.reduceat(np.where(select, data_fft, 0), boundaries)
# do the inverse fourier transform to get a sound wave back
recovered_signal = np.fft.irfft(filtered_data_fft)
# write sound wave to wave file
comptype="NONE"
compname="not compressed"
nchannels=1
sampwidth=2
wav_file=wave.open("missile_output.wav", 'w')
wav_file.setparams((nchannels, sampwidth, int(sampling_rate), num_samples, comptype, compname))
for s in recovered_signal:
    wav_file.writeframes(struct.pack('h', int(s)))
wav_file.close()
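One possible way to tune one frequency at a time without touching individual output samples (a sketch, not from the original post) is to process the sound in short-time frames and snap a growing fraction of the FFT bins in each frame; the snapping itself could reuse the reduceat binning logic above, wrapped into a snap_bins helper:

import numpy as np
from scipy.signal import stft, istft

def gradual_snap(signal, fs, snap_bins, nperseg=2048):
    # snap_bins(spectrum) -> spectrum with each bin moved to the nearest
    # valid frequency (hypothetical helper, e.g. the reduceat logic above)
    f, t, Z = stft(signal, fs, nperseg=nperseg)
    n_frames = Z.shape[1]
    for k in range(n_frames):
        frac = k / max(n_frames - 1, 1)   # 0.0 at the start, 1.0 at the end
        cutoff = int(frac * Z.shape[0])   # how many low bins to snap so far
        snapped = snap_bins(Z[:, k])
        Z[:cutoff, k] = snapped[:cutoff]  # bins above cutoff stay untouched
    _, y = istft(Z, fs, nperseg=nperseg)
    return y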
I am trying to create a training data file which is structured as follows:
[Rows = Samples, Columns = features]
So if I have 100 samples and 2 features the shape of my np.array would be (100,2).
The list below contains path strings to the .nrrd 3D sample patch-data files.
['/Users/FK/Documents/image/0128/subject1F_200.nrrd',
'/Users/FK/Documents/image/0128/subject2F_201.nrrd']
This is the code I have so far:
training_file = []

# for each sample in my image folder
for patches in dir_0128_list:
    # read the 64x64x64 numpy array
    data, options = nrrd.read(patches)
    # calculate the median and sum of the 3D array file: 2 features per sample
    f_median = np.median(data)
    training_file.append(f_median)
    f_sum = np.sum(data)
    training_file.append(f_sum)
    # calculate a numpy array with shape (169,), containing 169 features per sample
    f_mof = my_own_function(data)
    training_file.append(f_mof)

training_file = np.array((training_file), dtype=np.float32)
# training_file = np.column_stack((core_training_list))
If I don't use the np.column_stack function I get a (173,1) matrix, and (1,173) if I do. In this scenario it should have a (2,171) shape.
I want to calculate the sum and median and append them to a list or numpy array column-wise. At the end of each loop iteration I want to jump one row down and append the 2 features column-wise for the next sample, and so on...
Very simple solution
Instead of
f_median = np.median(data)
training_file.append(f_median)
f_sum = np.sum(data)
training_file.append(f_sum)
you could do
training_file.append((np.median(data), np.sum(data)))
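With one tuple appended per sample, the final conversion then comes out with the requested shape:

# each element of training_file is now a (median, sum) pair,
# so the array has shape (n_samples, 2)
training_file = np.array(training_file, dtype=np.float32)
print(training_file.shape)   # e.g. (100, 2)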
Slightly longer solution
You would still have one piece of consecutive code that is not easy to reuse and test individually. I would structure the script into different parts:
Iterate over the files to read the patches
Calculate the mean and sum
Aggregate to the requested format
Read the patches
def read_patches(files):
    for file in files:
        yield nrrd.read(file)
This makes a generator yielding the patch info.
Calculate
def parse_patch(patch):
    data, options = patch
    return np.median(data), np.sum(data)
Putting it together
from pathlib import Path
file_dir = Path(<my_filedir>)
files = file_dir.glob('*.nrrd')
patches = read_patches(files)
training_file = np.array([parse_patch(patch) for patch in patches], dtype=np.float32)
This might look convoluted, but it allows for easy testing of each of the sub-blocks.
I am using the random number routines in Python in the following code to create a noise signal.
import random
import numpy as np
import matplotlib.pyplot as plt

res = 10

# add noise to each X bin across the signal
X = np.arange(-600, 600, res)
for i in range(10000):
    noise = [random.uniform(-2, 2) for _ in range(len(X))]
    # custom module to save output of X and noise to .fits file
    wp.save_fits('test10000', X, noise)

# V and I are defined elsewhere in the original script
plt.plot(V, I)
plt.show()
In this example I generate 10,000 'noise.fits' files that I then wish to co-add, in order to show the expected 1/sqrt(N) dependence of the stacked noise root-mean-square (rms) as a function of the number of objects co-added.
My problem is that the rms follows this dependency up until ~1000 objects, at which point it deviates upwards, suggesting that the random number generator is repeating values.
Is there a routine, or a way to structure the code, that will avoid or minimise this repetition? (Ideally with the numbers as floats between a min and max value, with max > 1 and min < -1.)
Here is the output of the co-adding code, with the code itself pasted at the bottom for reference.
If I use random.random() instead, the result is worse.
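One thing worth trying (a sketch, not from the original post) is NumPy's Generator API, which draws the whole noise array in one vectorised call from a well-tested PCG64 stream instead of calling random.uniform() in a Python loop:

import numpy as np

rng = np.random.default_rng()

res = 10
X = np.arange(-600, 600, res)
for i in range(10000):
    noise = rng.uniform(-2, 2, size=len(X))
    # wp.save_fits('test10000', X, noise)   # custom save routine from the question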
Here is my code, which adds the noise signal files together, averaging over the number of objects.
import os
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
import glob
rms_arr = []
#vel_w_arr = []

filelist = glob.glob('/Users/thbrown/Documents/HI_stacking/mockcat/testing/test10000/M*.fits')
filelist.sort()

for i in filelist:
    print(i)

    # open an existing FITS file
    hdulist = fits.open(str(i))

    # assuming the first extension is the table, assign the data to a record array
    tbdata = hdulist[1].data
    #index = np.arange(len(filelist))

    # access the signal column
    noise = tbdata.field(1)
    # access the vel column
    X = tbdata.field(0)

    if i == filelist[0]:
        stack = np.zeros(len(noise))
        tot_rms = 0
        #print(len(stack))

    # sum signal in loop
    stack = (stack + noise)
    rms = np.std(stack)
    rms_arr = np.append(rms_arr, rms)

numgal = np.arange(1, np.size(filelist) + 1)
avg_rms = rms_arr / numgal
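To check the result against the expectation, one could overlay the theoretical curve (a sketch; using the single-file rms as the normalisation is an assumption):

# compare the measured average rms with the expected 1/sqrt(N) fall-off
expected = avg_rms[0] / np.sqrt(numgal)
plt.loglog(numgal, avg_rms, label='measured')
plt.loglog(numgal, expected, '--', label='1/sqrt(N)')
plt.legend()
plt.show()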