How can I address large memory usage by MoviePy?

I am trying to make a video from a large number of images using MoviePy. The approach works fine for small numbers of images, but the process is killed for large numbers. By the time about 500 images have been added, the Python process is using about half of the available memory, and there are many more images than that.
How should I address this? I want the processing to complete, and I don't mind if it takes a bit longer, but it would be good if I could limit the memory and CPU usage in some way. With the current approach, the machine becomes almost unusable while processing.
The code is as follows:
import os
import time
from moviepy.editor import *

def ls_files(
    path = "."
):
    return([fileName for fileName in os.listdir(path) if os.path.isfile(
        os.path.join(path, fileName)
    )])

def main():
    listOfFiles = ls_files()
    listOfTileImageFiles = [fileName for fileName in listOfFiles
        if "_tile.png" in fileName
    ]
    numberOfTiledImages = len(listOfTileImageFiles)
    # Create a video clip for each image.
    print("create video")
    videoClips = []
    imageDurations = []
    for imageNumber in range(0, numberOfTiledImages):
        imageFileName = str(imageNumber) + "_tile.png"
        print("add image {fileName}".format(
            fileName = imageFileName
        ))
        imageClip = ImageClip(imageFileName)
        duration = 0.1
        videoClip = imageClip.set_duration(duration)
        # Determine the image start time by calculating the sum of the durations
        # of all previous images.
        if imageNumber != 0:
            videoStartTime = sum(imageDurations[0:imageNumber])
        else:
            videoStartTime = 0
        videoClip = videoClip.set_start(videoStartTime)
        videoClips.append(videoClip)
        imageDurations.append(duration)
    fullDuration = sum(imageDurations)
    video = concatenate(videoClips)
    video.write_videofile(
        "video.mp4",
        fps = 30,
        codec = "mpeg4",
        audio_codec = "libvorbis"
    )

if __name__ == "__main__":
    main()

If I understood correctly, you want to use the different images as the frames of your video.
In this case you should use ImageSequenceClip() (it's in the library but not yet in the web docs; see here for the documentation).
Basically, you just write
from moviepy.editor import ImageSequenceClip

clip = ImageSequenceClip("some/directory/", fps=10)
clip.write_videofile("result.mp4")
It will read the images in the directory in alphanumerical order, while keeping only one frame at a time in memory.
While I am at it, you can also provide a list of filenames or a list of numpy arrays to ImageSequenceClip.
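For example, since the question's frames are numbered, a minimal sketch that builds the list in numeric order (plain alphanumeric directory order would put "10_tile.png" before "2_tile.png"); numberOfTiledImages is the count from the question:

from moviepy.editor import ImageSequenceClip

# Build the filename list in numeric order to match the question's naming.
fileNames = ["{}_tile.png".format(n) for n in range(numberOfTiledImages)]
clip = ImageSequenceClip(fileNames, fps=10)  # each image shown for 1/10 s
clip.write_videofile("video.mp4", fps=30, codec="mpeg4")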
Note that if you just want to turn images into a video, without anything fancier like adding titles or compositing with another video, then you can do it directly with ffmpeg. From memory, the command should be something like:
ffmpeg -framerate 10 -i %d_tile.png result.mp4

Related

Pygame.mixer.sndarray.make_sound not working

I am doing some simple DSP, playing both the original and the filtered audio. I am using librosa to load the mp3 files in as arrays.
Here is the code where I load the files:
import os
from random import randint

import librosa
from mutagen.mp3 import MP3  # assumed import; MP3(path).info.length matches mutagen's API

PATH = os.getcwd() + "/audio files"
to_load_array = os.listdir(PATH)
random_mp3 = to_load_array[randint(0, len(to_load_array) - 1)]
random_mp3_path = "audio files/" + random_mp3
data, sr = librosa.load(random_mp3_path, duration = MP3(random_mp3_path).info.length, mono = False, sr = 44100)
Elsewhere in my program I have a GUI with a button that lets you switch between playing the filtered and the unfiltered version. I do this by playing both at the same time on different channels in pygame.mixer while alternating which one is muted. I am using pygame.mixer because it allows for this simultaneous playing.
Currently I am filtering, exporting the filtered file as an mp3, and then playing that mp3. This feels like too many I/O operations, and I would like to speed the program up a bit by playing the filtered and unfiltered arrays directly from within Python.
I found the function pygame.mixer.sndarray.make_sound, which is supposed to turn your array into a Sound object that can be played with mixer.play().
This is how I am using the function:
def setSoundObjects(self, original, filtered):
    print('original data shape: ', original.shape)
    print('format (From get_init): ', pygame.mixer.get_init()[2])
    print("this is the original data: ", original)

    original = numpy.int16(original * (2 ** 15))
    filtered = numpy.int16(filtered * (2 ** 15))

    print("this is the data after numpy conversion: ", original)

    # set the sound objects
    print("Number of channels: ", pygame.mixer.get_num_channels())
    self.original = pygame.sndarray.make_sound(original)
    self.filtered = pygame.sndarray.make_sound(filtered)
However, whenever I try to use it I keep getting this error:
self.original = pygame.mixer.Sound(array = original)
ValueError: Array depth must match number of mixer channels
Here is the output from what I am printing to the terminal to try and debug this:
format (From get_init): 2
this is the original data: [[ 0. 0. 0. ... -0.00801174 -0.00784447
-0.01003712]
[ 0. 0. 0. ... -0.00695544 -0.00674583
-0.00865676]]
this is the data after numpy conversion: [[ 0 0 0 ... -262 -257 -328]
[ 0 0 0 ... -227 -221 -283]]
Number of channels: 8
I was trying the numpy conversion because of this post, but it didn't help.
Any help would be greatly appreciated, thank you!
Let me know if I should add anything else to help clarify what I'm doing. I did not include the entire code because the program is quite large already.
This question is different from this other question because I am already changing the format of the array. I have tried that solution and it does not work in my code; I get the same error if I try to pass the size = 32 parameter into the init.
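For what it's worth, a hedged guess at the mismatch: librosa.load(..., mono=False) returns an array of shape (channels, samples), while pygame.sndarray.make_sound() expects (samples, channels) matching the mixer's channel count. A minimal sketch of the reshape (the mixer init values are assumptions):

import numpy
import pygame

pygame.mixer.init(frequency=44100, size=-16, channels=2)  # assumed settings

# original has shape (2, n_samples) from librosa; the mixer wants (n_samples, 2),
# so transpose and make the array C-contiguous before building the Sound.
original = numpy.int16(original * (2 ** 15))
original = numpy.ascontiguousarray(original.T)
sound = pygame.sndarray.make_sound(original)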

File retention mechanism in a large data storage

Recently I faced a performance problem with mp4 file retention. I have a kind of recorder which saves 1 min long mp4 files from multiple RTSP streams. Those files are stored on an external drive in a file tree like this:
./recordings/{camera_name}/{YYYY-MM-DD}/{HH-MM}.mp4
Apart from video files, there are many other files on this drive which are not considered (unless they have the mp4 extension), as they take much less space.
The file retention scheme is as follows. Every minute, the python script that is responsible for recording checks the external drive's fill level. If the level is above 80%, it performs a scan of the whole drive, looking for .mp4 files. When scanning is done, it sorts the list of files by creation date and deletes a number of the oldest files equal to the number of cameras.
The part of the code, which is responsible for files retention, is shown below.
total, used, free = shutil.disk_usage("/home")
used_percent = int(used / total * 100)
if used_percent > 80:
    logging.info("SSD usage %s. Looking for the oldest files", used_percent)
    try:
        oldest_files = sorted(
            (
                os.path.join(dirname, filename)
                for dirname, dirnames, filenames in os.walk('/home')
                for filename in filenames
                if filename.endswith(".mp4")
            ),
            key=lambda fn: os.stat(fn).st_mtime,
        )[:len(camera_devices)]
        logging.info("Removing %s", oldest_files)
        for oldest_file in oldest_files:
            os.remove(oldest_file)
            logging.info("%s removed", oldest_file)
    except ValueError as e:
        # no files to delete
        pass
(/home is external drive mount point)
The problem is that this mechanism used to work like a charm when I used a 256 or 512 GB SSD. Now I need larger space (more cameras and longer storage time), and it takes a lot of time to create the file list on a larger SSD (2 to 5 TB now, and maybe 8 TB in the future). The scanning process takes a lot more than 1 min, which could be mitigated by scanning less often and extending the length of the "to delete" list. The real problem is that the process itself uses a lot of CPU (through I/O ops). The performance drop is visible in the whole system: other applications, like some simple computer vision algorithms, run slower, and the CPU load can even cause a kernel panic.
The HW I work on is Nvidia Jetson Nano and Xavier NX; both devices have the performance problem I described above.
The question is whether you know some algorithm or out-of-the-box software for file retention that will work for the case I described. Or maybe there is a way to rewrite my code to make it more reliable and performant?
EDIT:
I was able to lower the os.walk() impact by limiting the space to check. Now I just scan /home/recordings and /home/recognition/, which also lowers the directory tree depth for the recursive scan. At the same time, I've added .jpg file checking, so now I look for both .mp4 and .jpg. The result is much better with this implementation.
However, I need further optimization. I prepared some test cases and tested them on a 1 TB drive which is 80% filled (mostly media files). I attached profiler results per case below.
@time_measure
def method6():
    paths = [
        "/home/recordings",
        "/home/recognition",
        "/home/recognition/marked_frames",
    ]
    files = []
    for path in paths:
        files.extend((
            os.path.join(dirname, filename)
            for dirname, dirnames, filenames in os.walk(path)
            for filename in filenames
            if (filename.endswith(".mp4") or filename.endswith(".jpg")) and not os.path.islink(os.path.join(dirname, filename))
        ))
    oldest_files = sorted(
        files,
        key=lambda fn: os.stat(fn).st_mtime,
    )
    print(oldest_files[:5])
@time_measure
def method7():
    ext = [".mp4", ".jpg"]
    paths = [
        "/home/recordings/*/*/*",
        "/home/recognition/*",
        "/home/recognition/marked_frames/*",
    ]
    files = []
    for path in paths:
        files.extend((file for file in glob(path) if not os.path.islink(file) and (file.endswith(".mp4") or file.endswith(".jpg"))))
    oldest_files = sorted(files, key=lambda fn: os.stat(fn).st_mtime)
    print(oldest_files[:5])
The original implementation on the same data set took ~100 s.
EDIT 2:
Comparison of @norok2's proposals. I compared them with method6 and method7 from above. I tried several times, with similar results.
Testing method7
['/home/recordings/35e68df5-44b1-5010-8d12-74b892c60136/2022-06-24/17-36-18.jpg', '/home/recordings/db33186d-3607-5055-85dd-7e5e3c46faba/2021-11-22/11-27-30.jpg', '/home/recordings/acce21a2-763d-56fe-980d-a85af1744b7a/2021-11-22/11-27-30.jpg', '/home/recordings/b97eb889-e050-5c82-8034-f52ae2d99c37/2021-11-22/11-28-23.jpg', '/home/recordings/01ae845c-b743-5b64-86f6-7f1db79b73ae/2021-11-22/11-28-23.jpg']
Took 24.73726773262024 s
_________________________
Testing find_oldest
['/home/recordings/35e68df5-44b1-5010-8d12-74b892c60136/2022-06-24/17-36-18.jpg', '/home/recordings/db33186d-3607-5055-85dd-7e5e3c46faba/2021-11-22/11-27-30.jpg', '/home/recordings/acce21a2-763d-56fe-980d-a85af1744b7a/2021-11-22/11-27-30.jpg', '/home/recordings/b97eb889-e050-5c82-8034-f52ae2d99c37/2021-11-22/11-28-23.jpg', '/home/recordings/01ae845c-b743-5b64-86f6-7f1db79b73ae/2021-11-22/11-28-23.jpg']
Took 34.355509757995605 s
_________________________
Testing find_oldest_cython
['/home/recordings/35e68df5-44b1-5010-8d12-74b892c60136/2022-06-24/17-36-18.jpg', '/home/recordings/db33186d-3607-5055-85dd-7e5e3c46faba/2021-11-22/11-27-30.jpg', '/home/recordings/acce21a2-763d-56fe-980d-a85af1744b7a/2021-11-22/11-27-30.jpg', '/home/recordings/b97eb889-e050-5c82-8034-f52ae2d99c37/2021-11-22/11-28-23.jpg', '/home/recordings/01ae845c-b743-5b64-86f6-7f1db79b73ae/2021-11-22/11-28-23.jpg']
Took 25.81963086128235 s
(Profiler results were attached per case: method7 (glob()), iglob(), and the Cython version.)
You could get an extra few percent speed-up on top of your method7() with the following:
import os
import glob

def find_oldest(paths=("*",), exts=(".mp4", ".jpg"), k=5):
    result = [
        filename
        for path in paths
        for filename in glob.iglob(path)
        if any(filename.endswith(ext) for ext in exts) and not os.path.islink(filename)]
    mtime_idxs = sorted(
        (os.stat(fn).st_mtime, i)
        for i, fn in enumerate(result))
    return [result[mtime_idxs[i][1]] for i in range(k)]
The main improvements are:
use iglob instead of glob -- while it may be of comparable speed, it takes significantly less memory, which may help on low-end machines
str.endswith() is done before the allegedly more expensive os.path.islink(), which helps reduce the number of such calls due to short-circuiting
an intermediate list with all the mtimes is produced to minimize the os.stat() calls
This can be sped up even further with Cython:
%%cython --cplus -c-O3 -c-march=native -a
import os
import glob

cpdef find_oldest_cy(paths=("*",), exts=(".mp4", ".jpg"), k=5):
    result = []
    for path in paths:
        for filename in glob.iglob(path):
            good_ext = False
            for ext in exts:
                if filename.endswith(ext):
                    good_ext = True
                    break
            if good_ext and not os.path.islink(filename):
                result.append(filename)
    mtime_idxs = []
    for i, fn in enumerate(result):
        mtime_idxs.append((os.stat(fn).st_mtime, i))
    mtime_idxs.sort()
    return [result[mtime_idxs[i][1]] for i in range(k)]
My tests on the following files:
def gen_files(n, exts=("mp4", "jpg", "txt"), filename="somefile", content="content"):
    for i in range(n):
        ext = exts[i % len(exts)]
        with open(f"{filename}{i}.{ext}", "w") as f:
            f.write(content)

gen_files(10_000)
produces the following:
funcs = find_oldest_OP, find_oldest, find_oldest_cy
timings = []
base = funcs[0]()
for func in funcs:
    res = func()
    is_good = base == res
    timed = %timeit -r 8 -n 4 -q -o func()
    timing = timed.best * 1e3
    timings.append(timing if is_good else None)
    print(f"{func.__name__:>24}  {is_good}  {timing:10.3f} ms")
# find_oldest_OP  True  81.074 ms
# find_oldest     True  70.994 ms
# find_oldest_cy  True  64.335 ms
find_oldest_OP is the following, based on method7() from OP:
def find_oldest_OP(paths=("*",), exts=(".mp4", ".jpg"), k=5):
    files = []
    for path in paths:
        files.extend(
            (file for file in glob.glob(path)
             if not os.path.islink(file) and any(file.endswith(ext) for ext in exts)))
    oldest_files = sorted(files, key=lambda fn: os.stat(fn).st_mtime)
    return oldest_files[:k]
The Cython version seems to point to a ~25% reduction in execution time.
You could use the subprocess module to list all the mp4 files directly, without having to loop through all the files in the directory.
import os
import subprocess as sb

files = sb.getoutput(r"dir /b /s .\home\*.mp4").split("\n")
oldest_files = sorted(files, key=lambda fn: os.stat(fn).st_mtime)[:len(camera_devices)]
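Note that dir is a Windows shell command; on the Jetson (Linux), the equivalent would presumably be find. A hedged sketch, with camera_devices as in the question:

import os
import subprocess as sb

# find prints one matching path per line; drop the empty string a trailing newline leaves.
out = sb.getoutput("find /home -name '*.mp4'")
files = [f for f in out.split("\n") if f]
oldest_files = sorted(files, key=lambda fn: os.stat(fn).st_mtime)[:len(camera_devices)]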
A quick optimization would be to not bother checking file creation time, and to trust the filename instead.
import os
import re
import shutil
import logging
from datetime import datetime

total, used, free = shutil.disk_usage("/home")
used_percent = int(used / total * 100)
if used_percent > 80:
    logging.info("SSD usage %s. Looking for the oldest files", used_percent)
    try:
        files = []
        for dirname, dirnames, filenames in os.walk('/home/recordings'):
            for filename in filenames:
                files.append((
                    name := os.path.join(dirname, filename),
                    datetime.strptime(
                        re.search(r'\d{4}-\d{2}-\d{2}\/\d{2}-\d{2}', name)[0],
                        "%Y-%m-%d/%H-%M"
                    )
                ))
        oldest_files = sorted(files, key=lambda e: e[1])[:len(camera_devices)]
        logging.info("Removing %s", oldest_files)
        for oldest_file in oldest_files:
            os.remove(oldest_file[0])  # each entry is a (path, timestamp) tuple
        # logging.info("%s removed", oldest_file)
        logging.info("Removed")
    except ValueError as e:
        # no files to delete
        pass

How can I match an audio clip inside an audio clip with Python? [duplicate]

I have a load of 3 hour MP3 files, and every ~15 minutes a distinct 1 second sound effect is played, which signals the beginning of a new chapter.
Is it possible to identify each time this sound effect is played, so I can note the time offsets?
The sound effect is similar every time, but because it's been encoded in a lossy file format, there will be a small amount of variation.
The time offsets will be stored in the ID3 Chapter Frame MetaData.
Example Source, where the sound effect plays twice.
ffmpeg -ss 0.9 -i source.mp3 -t 0.95 sample1.mp3 -acodec copy -y
Sample 1 (Spectrogram)
ffmpeg -ss 4.5 -i source.mp3 -t 0.95 sample2.mp3 -acodec copy -y
Sample 2 (Spectrogram)
I'm very new to audio processing, but my initial thought was to extract a sample of the 1 second sound effect, then use librosa in Python to extract a floating-point time series for both files, round the floating-point numbers, and try to get a match.
import numpy
import librosa

print("Load files")
source_series, source_rate = librosa.load('source.mp3') # 3 hour file
sample_series, sample_rate = librosa.load('sample.mp3') # 1 second file

print("Round series")
source_series = numpy.around(source_series, decimals=5)
sample_series = numpy.around(sample_series, decimals=5)

print("Process series")
source_start = 0
sample_matching = 0
sample_length = len(sample_series)

for source_id, source_sample in enumerate(source_series):
    if source_sample == sample_series[sample_matching]:
        sample_matching += 1
        if sample_matching >= sample_length:
            print(float(source_start) / source_rate)
            sample_matching = 0
        elif sample_matching == 1:
            source_start = source_id
    else:
        sample_matching = 0
This does not work with the MP3 files above, but it did with an MP4 version, where it was able to find the sample I extracted; but it was only that one sample (not all 12).
I should also note this script takes just over 1 minute to process the 3 hour file (which includes 237,426,624 samples). So I can imagine that some kind of averaging on every loop would cause this to take considerably longer.
Trying to directly match waveform samples in the time domain is not a good idea. The mp3 encoding will preserve the perceptual properties, but it is quite likely that the phases of the frequency components will be shifted, so the sample values will not match.
You could instead try to match the volume envelopes of your effect and your sample.
This is less likely to be affected by the mp3 process.
First, normalise your sample so the embedded effects are at the same level as your reference effect. Construct new waveforms from the effect and the sample by using the average of the peak values over time frames that are just short enough to capture the relevant features (better still, use overlapping frames). Then use cross-correlation in the time domain.
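A rough sketch of that idea, assuming effect_series and source_series are already loaded as numpy arrays, and using a max-per-frame envelope as a simple stand-in for the peak averaging (the frame length is an arbitrary starting point to tune):

import numpy as np

def peak_envelope(x, frame_length=256):
    # One peak value per frame: the maximum absolute amplitude.
    n = len(x) // frame_length * frame_length
    frames = x[:n].reshape(-1, frame_length)
    return np.abs(frames).max(axis=1)

env_effect = peak_envelope(effect_series)   # the reference effect
env_source = peak_envelope(source_series)   # the long recording
scores = np.correlate(env_source, env_effect, mode='valid')
best_frame = scores.argmax()  # frame offset of the strongest match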
If this does not work, you could analyze each frame using an FFT; this gives you a feature vector for each frame. You then try to find matches of the sequence of features in your effect within the sample. This is similar to jonnor's suggestion (https://stackoverflow.com/users/1967571/jonnor). MFCC is used in speech recognition, but since you are not detecting speech, FFT is probably OK.
I am assuming the effect plays by itself (no background noise) and is added to the recording electronically (as opposed to being recorded via a microphone). If this is not the case, the problem becomes more difficult.
This is an Audio Event Detection problem. If the sound is always the same and there are no other sounds at the same time, it can probably be solved with a Template Matching approach. At least if there are no other sounds with other meanings that sound similar.
The simplest kind of template matching is to compute the cross-correlation between your input signal and the template.
Cut out an example of the sound to detect (using Audacity). Take as much as possible, but avoid the start and end. Store this as a .wav file.
Load the .wav template using librosa.load().
Chop up the input file into a series of overlapping frames. The length should be the same as your template. This can be done with librosa.util.frame.
Iterate over the frames, and compute cross-correlation between frame and template using numpy.correlate.
High values of cross-correlation indicate a good match. A threshold can be applied in order to decide what is an event or not. And the frame number can be used to calculate the time of the event.
You should probably prepare some shorter test files which have both some examples of the sound to detect as well as other typical sounds.
If the volume of the recordings is inconsistent you'll want to normalize that before running detection.
If cross-correlation in the time domain does not work, you can compute the melspectrogram or MFCC features and cross-correlate those. If this does not yield OK results either, a machine learning model can be trained using supervised learning, but this requires labeling a bunch of data as event/not-event.
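A minimal sketch of this recipe; the filenames are hypothetical and the threshold would need tuning on the test files mentioned above:

import numpy as np
import librosa

template, sr = librosa.load("template.wav")   # the cut-out sound effect
signal, _ = librosa.load("input.mp3", sr=sr)  # the long recording

frame_length = len(template)
hop_length = frame_length // 2  # overlapping frames
frames = librosa.util.frame(signal, frame_length=frame_length, hop_length=hop_length)

threshold = 0.5  # decides what counts as an event; tune on labeled examples
for i in range(frames.shape[1]):
    frame = frames[:, i]
    # normalized cross-correlation of the frame against the template
    denom = np.linalg.norm(frame) * np.linalg.norm(template)
    score = float(np.dot(frame, template)) / denom if denom else 0.0
    if score > threshold:
        print("possible event at {:.2f} s (score {:.2f})".format(i * hop_length / sr, score))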
To follow up on the answers by @jonnor and @paul-john-leonard: they are both correct; by using frames (FFT) I was able to do Audio Event Detection.
I've written up the full source code at:
https://github.com/craigfrancis/audio-detect
Some notes though:
To create the templates, I used ffmpeg:
ffmpeg -ss 13.15 -i source.mp4 -t 0.8 -acodec copy -y templates/01.mp4;
I decided to use librosa.core.stft, but I needed to make my own implementation of this stft function for the 3 hour file I'm analysing, as it's far too big to keep in memory.
When using stft I tried a hop_length of 64 at first, rather than the default (512), as I assumed that would give me more data to work with... the theory might be true, but 64 was far too detailed and caused it to fail most of the time.
I still have no idea how to get cross-correlation between frame and template to work (via numpy.correlate)... instead I took the results per frame (the 1025 buckets, not 1024, which I believe relate to the Hz frequencies found) and did a very simple average-difference check, then ensured that the average was above a certain value (my test case worked at 0.15; the main files I'm using this on required 0.55, presumably because the main files had been compressed quite a bit more):
hz_score = abs(source[0:1025,x] - template[2][0:1025,y])
hz_score = sum(hz_score)/float(len(hz_score))
When checking these scores, it's really useful to show them on a graph. I often used something like the following:
import matplotlib.pyplot as plt

plt.figure(figsize=(30, 5))
plt.axhline(y=hz_match_required_start, color='y')
while x < source_length:
    debug.append(hz_score)
    if x == mark_frame:
        plt.axvline(x=len(debug), ymin=0.1, ymax=1, color='r')
plt.plot(debug)
plt.show()
When you create the template, you need to trim off any leading silence (to avoid bad matching) and an extra ~5 frames (it seems that the compression / re-encoding process alters this)... likewise, remove the last 2 frames (I think the frames include a bit of data from their surroundings, where the last one in particular can be a bit off).
When you start finding a match, you might find it's OK for the first few frames, then it fails... you will probably need to try again a frame or two later. I found it easier to have a process that supported multiple templates (slight variations on the sound); it would check their first testable (e.g. 6th) frame and, if that matched, put them in a list of potential matches. Then, as it progressed to the next frames of the source, it could compare them to the next frames of the template, until all frames in the template had been matched (or failed).
This might not be an answer; it's just where I got to before I started researching the answers by @jonnor and @paul-john-leonard.
I was looking at the spectrograms you can get by using librosa stft and amplitude_to_db, and thinking that if I took the data that goes into the graphs, with a bit of rounding, I could potentially find the one sound effect being played:
https://librosa.github.io/librosa/generated/librosa.display.specshow.html
The code I've written below kind of works; although it:
Returns quite a few false positives, which might be fixed by tweaking the parameters of what is considered a match.
Would need the librosa functions replaced with something that can parse, round, and do the match checks in one pass, as a 3 hour audio file causes Python to run out of memory on a computer with 16 GB of RAM after ~30 minutes, before it even gets to the rounding bit.
import sys
import numpy
import librosa

#--------------------------------------------------

if len(sys.argv) == 3:
    source_path = sys.argv[1]
    sample_path = sys.argv[2]
else:
    print('Missing source and sample files as arguments')
    sys.exit()

#--------------------------------------------------

print('Load files')

source_series, source_rate = librosa.load(source_path) # The 3 hour file
sample_series, sample_rate = librosa.load(sample_path) # The 1 second file

source_time_total = float(len(source_series) / source_rate)

#--------------------------------------------------

print('Parse Data')

source_data_raw = librosa.amplitude_to_db(abs(librosa.stft(source_series, hop_length=64)))
sample_data_raw = librosa.amplitude_to_db(abs(librosa.stft(sample_series, hop_length=64)))

sample_height = sample_data_raw.shape[0]

#--------------------------------------------------

print('Round Data') # Also switches X and Y indexes, so X becomes time.

def round_data(raw, height):
    length = raw.shape[1]
    data = []
    range_length = range(1, (length - 1))
    range_height = range(1, (height - 1))
    for x in range_length:
        x_data = []
        for y in range_height:
            # neighbours = []
            # for a in [(x - 1), x, (x + 1)]:
            #     for b in [(y - 1), y, (y + 1)]:
            #         neighbours.append(raw[b][a])
            #
            # neighbours = (sum(neighbours) / len(neighbours))
            #
            # x_data.append(round(((raw[y][x] + raw[y][x] + neighbours) / 3), 2))
            x_data.append(round(raw[y][x], 2))
        data.append(x_data)
    return data

source_data = round_data(source_data_raw, sample_height)
sample_data = round_data(sample_data_raw, sample_height)

#--------------------------------------------------

sample_data = sample_data[50:268] # Temp: Crop the sample_data (318 to 218)

#--------------------------------------------------

source_length = len(source_data)
sample_length = len(sample_data)
sample_height -= 2

source_timing = float(source_time_total / source_length)

#--------------------------------------------------

print('Process series')

hz_diff_match = 18 # For every comparison, how much of a difference is still considered a match - With the Source, using Sample 2, the maximum diff was 66.06, with an average of ~9.9

hz_match_required_switch = 30 # After matching "start" for X, drop to the lower "end" requirement
hz_match_required_start = 850 # Out of a maximum match value of 1023
hz_match_required_end = 650
hz_match_required = hz_match_required_start

source_start = 0
sample_matched = 0

x = 0
while x < source_length:
    hz_matched = 0
    for y in range(0, sample_height):
        diff = source_data[x][y] - sample_data[sample_matched][y]
        if diff < 0:
            diff = 0 - diff
        if diff < hz_diff_match:
            hz_matched += 1
    # print('  {} Matches - {} # {}'.format(sample_matched, hz_matched, (x * source_timing)))
    if hz_matched >= hz_match_required:
        sample_matched += 1
        if sample_matched >= sample_length:
            print('  Found # {}'.format(source_start * source_timing))
            sample_matched = 0 # Prep for next match
            hz_match_required = hz_match_required_start
        elif sample_matched == 1: # First match, record where we started
            source_start = x
        if sample_matched > hz_match_required_switch:
            hz_match_required = hz_match_required_end # Go to a weaker match requirement
    elif sample_matched > 0:
        # print('  Reset {} / {} # {}'.format(sample_matched, hz_matched, (source_start * source_timing)))
        x = source_start # Matched something, so try again with x+1
        sample_matched = 0 # Prep for next match
        hz_match_required = hz_match_required_start
    x += 1

#--------------------------------------------------

noise reduction on multiple wav files in python

I have used the code from
https://github.com/davidpraise45/Audio-Signal-Processing
to make a function and run it on an entire folder, which contains around 100 wav files, but I am unable to get output and can't understand what the problem is.
def noise_reduction(dirName):
    types = ('*.wav', '*.aif', '*.aiff', '*.mp3', '*.au', '*.ogg')
    wav_file_list = []
    for files in types:
        wav_file_list.extend(glob.glob(os.path.join(dirName, files)))
    wav_file_list = sorted(wav_file_list)
    wav_file_list2 = []
    for i, wavFile in enumerate(wav_file_list):
        #samples = get_samples(wavFile,)
        (Frequency, samples) = read(wavFile)
        FourierTransformation = sp.fft(samples) # Calculating the fourier transformation of the signal
        scale = sp.linspace(0, Frequency, len(samples))
        b, a = signal.butter(5, 9800/(Frequency/2), btype='highpass') # ButterWorth filter 4350
        filteredSignal = signal.lfilter(b, a, samples)
        c, d = signal.butter(5, 200/(Frequency/4), btype='lowpass') # ButterWorth low-filter
        newFilteredSignal = signal.lfilter(c, d, filteredSignal) # Applying the filter to the signal
        write(New,wavFile, Frequency, newFilteredSignal)

noise_reduction("C:\\Users\\adity\\Desktop\\capstone\\hindi_dia_2\\sad\\sad_1.wav")
scipy.io.wavfile.read only supports the WAV format. It can not read aif, aiff, mp3, au, or ogg files.
You have four arguments to the scipy.io.wavfile.write function, which only takes three. New,wavFile should most likely be os.path.join(os.path.dirname(wavFile), "New"+os.path.basename(wavFile)). This creates a file with the New prefix in the same directory as the original. If you want to create the files in the current directory instead, use "New"+os.path.basename(wavFile).
You are passing a filename, not the name of a directory to your function:
noise_reduction("C:\\Users\\adity\\Desktop\\capstone\\hindi_dia_2\\sad\\sad_1.wav")
should likely be:
noise_reduction("C:\\Users\\adity\\Desktop\\capstone\\hindi_dia_2\\sad")
This causes the glob pattern to end up being C:\\Users\\adity\\Desktop\\capstone\\hindi_dia_2\\sad\\sad_1.wav\\*.wav. This pattern has no matches, unless sad_1.wav were a directory with files in it that end with .wav.
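Putting those three fixes together, a minimal corrected sketch (keeping the question's filter design unchanged):

import glob
import os
from scipy import signal
from scipy.io.wavfile import read, write

def noise_reduction(dirName):
    # Only WAV files, since scipy.io.wavfile.read supports nothing else.
    for wavFile in sorted(glob.glob(os.path.join(dirName, "*.wav"))):
        Frequency, samples = read(wavFile)
        b, a = signal.butter(5, 9800 / (Frequency / 2), btype='highpass')
        filteredSignal = signal.lfilter(b, a, samples)
        c, d = signal.butter(5, 200 / (Frequency / 4), btype='lowpass')
        newFilteredSignal = signal.lfilter(c, d, filteredSignal)
        # Write with the "New" prefix into the same directory as the original.
        write(os.path.join(dirName, "New" + os.path.basename(wavFile)),
              Frequency, newFilteredSignal)

noise_reduction("C:\\Users\\adity\\Desktop\\capstone\\hindi_dia_2\\sad")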

How can I properly use random.sample in my code?

My directory has hundreds of images and text files (.png and .txt). What's special about them is that each image has its own matching txt file; for example im1.png has im1.txt, news_im2.png has news_im2.txt, etc. What I want is some way to give it a parameter or percentage, let's say 40, where it randomly copies 40% of the images along with their corresponding texts to a new folder, and the most important word here is randomly: if I run it again I shouldn't get the same results. Ideally I should be able to pass two kinds of parameters (reminder: the first is the percentage for each sample), the second being the number of samples. For example, maybe I want my data in 3 different samples, randomly, not only 2; in this case it should be able to take destination directory paths equal to the number of samples I want and spread the files accordingly. For example, I shouldn't find img_1 in 2 different samples.
What I have done so far is simply set up my method to copy them:
import glob, os, shutil

source_dir = 'all_the_content/'
dest_dir = 'percentage_only/'
files = glob.iglob(os.path.join(source_dir, "*.png"))
for file in files:
    if os.path.isfile(file):
        shutil.copy2(file, dest_dir)
and the start of my code to set the random switching:
import os, shutil, random

my_pic_dict = {}
source_dir = '/home/michel/ubuntu/EAST/data_0.8/'
for element in os.listdir(source_dir):
    if element.endswith('.png'):
        my_pic_dict[element] = element.replace('.png', '.txt')

print(my_pic_dict)
print(len(my_pic_dict))
imgs_list = my_pic_dict.keys()
print(imgs_list)
How can I finalize it? I couldn't make random.sample work.
try this:
import random
import numpy as np
n_elem = 1000
n_samples = 4
percentages = [50,10,10,30]
indices = list(range(n_elem))
random.shuffle(indices)
elem_per_samples = [int(p*n_elem/100) for p in percentages]
limits = np.cumsum([0]+elem_per_samples)
samples = [indices[limits[i]:limits[i+1]] for i in range(n_samples)]
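To connect this back to the question: a hedged sketch of how the sampled index lists could drive the copying, assuming n_elem is set to len(imgs_list) and that my_pic_dict and source_dir are as in the question (the destination directory names are illustrative):

import os
import shutil

imgs_list = sorted(my_pic_dict.keys())  # fix an order so the indices are meaningful
dest_dirs = ['sample_0/', 'sample_1/', 'sample_2/', 'sample_3/']  # one per sample

for dest_dir, sample in zip(dest_dirs, samples):
    os.makedirs(dest_dir, exist_ok=True)
    for index in sample:
        png = imgs_list[index]
        # Copy the image and its matching txt file into the same sample folder.
        shutil.copy2(os.path.join(source_dir, png), dest_dir)
        shutil.copy2(os.path.join(source_dir, my_pic_dict[png]), dest_dir)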
