I've got a Python script that merges .wav files, based on a list of paths.
It's based on this code:
import wave

infiles = ["sound_1.wav", "sound_2.wav"]
outfile = "sounds.wav"

data = []
for infile in infiles:
    w = wave.open(infile, 'rb')
    data.append([w.getparams(), w.readframes(w.getnframes())])
    w.close()

output = wave.open(outfile, 'wb')
output.setparams(data[0][0])
output.writeframes(data[0][1])
output.writeframes(data[1][1])
output.close()
from this topic
How to join two wav files using python?
But I realized that it takes too much time to generate the file. In fact, I don't need to store the merged data as a file on the physical disk.
So the question is: is there any way to read the .wav files, merge them, and play the result straight from RAM?
EDIT: I forgot to specify that I need to play it on a Raspberry Pi 3. I tried PyAudio, which works fine on my laptop, but on the RPi the sound is slow and crackling (I used this example https://people.csail.mit.edu/hubert/pyaudio/docs/#example-blocking-mode-audio-i-o).
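One approach (not from the original post): since wave.open accepts any file-like object, the merged wav can be assembled in an io.BytesIO buffer instead of a disk file. A minimal sketch, assuming all inputs share the same sample rate, width, and channel count:

import io
import wave

infiles = ["sound_1.wav", "sound_2.wav"]

data = []
for infile in infiles:
    with wave.open(infile, 'rb') as w:
        data.append((w.getparams(), w.readframes(w.getnframes())))

buffer = io.BytesIO()
output = wave.open(buffer, 'wb')  # wave accepts any file-like object
output.setparams(data[0][0])      # assumes all inputs share the same params
for _, frames in data:
    output.writeframes(frames)
output.close()                    # close() patches the header sizes
buffer.seek(0)                    # rewind so a player can read from the start

The buffer can then be re-opened with wave.open(buffer, 'rb') and handed to a playback library; on the Pi, pygame.mixer.Sound(file=buffer) or piping the bytes to aplay are possible alternatives if PyAudio keeps crackling.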
Related
I am more or less following the code below to merge two audio files. It mostly works: AudioSegment can export both the original files and the combined file to a folder, and these play fine in Finder (Mac). However, when brought into a music app like Ableton, the waveform is distorted and sounds like digital garbage. I have a feeling this is because this code is messing with the wav header.
I have also noticed that the combined sound shows a bitrate of 32 in the Finder file info, whereas I am specifically exporting it with bitrate='24'.
Any theories?
from pydub import AudioSegment
sound1 = AudioSegment.from_file("1.wav", format="wav")
sound2 = AudioSegment.from_file("2.wav", format="wav")
# Overlay sound2 over sound1 at position 0
overlay = sound1.overlay(sound2, position=0)
# simple export
file_handle = overlay.export("output.wav", format="wav", bitrate='24')
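One possible explanation (my addition, not from the original post): for wav output pydub writes PCM directly, and the bitrate argument (intended for compressed codecs) is most likely ignored, so the export keeps whatever sample width the inputs had. If a 24-bit file is the goal, the sample width can be set explicitly with pydub's set_sample_width; a hedged sketch:

from pydub import AudioSegment

sound1 = AudioSegment.from_file("1.wav", format="wav")
sound2 = AudioSegment.from_file("2.wav", format="wav")
overlay = sound1.overlay(sound2, position=0)

# set_sample_width(3) requests 3 bytes per sample, i.e. 24-bit PCM
overlay_24bit = overlay.set_sample_width(3)
overlay_24bit.export("output.wav", format="wav")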
Note to others: I solved this by moving to Sox (or PySox) instead of AudioSegment, which seems to work much more reliably with all the features I was looking for.
The Question
I want to load an audio file of any type (mp3, m4a, flac, etc) and write it to an output stream.
I tried using pydub, but it loads the entire file at once, which takes forever and easily runs out of memory.
I also tried using python-vlc, but it's been unreliable and too much of a black box.
So, how can I open large audio files chunk-by-chunk for streaming?
Edit #1
I found half of a solution here, but I'll need to do more research for the other half.
TL;DR: use subprocess and ffmpeg to convert the file to wav data, and pipe that data into np.frombuffer. The problem is that the subprocess still has to finish before frombuffer is used.
...unless it's possible to have the pipe written to on one thread while np reads it from another, which I haven't tested yet. For now, this problem is not solved.
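One untested possibility (not from the original post): a second thread may not be needed at all, because a pipe can be read incrementally while the subprocess is still writing to it. A minimal sketch, assuming ffmpeg is on the PATH and 16-bit/44.1 kHz output; stream_pcm_chunks is an illustrative name:

import subprocess
import numpy as np

def stream_pcm_chunks(path, chunk_frames=4096, rate=44100, channels=2):
    # decode any ffmpeg-supported file to raw PCM on stdout
    cmd = [
        "ffmpeg", "-i", path,
        "-f", "s16le",             # raw 16-bit little-endian PCM
        "-acodec", "pcm_s16le",
        "-ar", str(rate),
        "-ac", str(channels),
        "-",                       # write to stdout
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    bytes_per_chunk = chunk_frames * channels * 2  # 2 bytes per s16 sample
    while True:
        chunk = proc.stdout.read(bytes_per_chunk)
        if not chunk:
            break
        yield np.frombuffer(chunk, dtype=np.int16)
    proc.wait()

Each yielded array holds interleaved samples; reshape to (-1, channels) if per-channel access is needed.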
I think the python package https://github.com/irmen/pyminiaudio can be helpful. You can stream an audio file like this:
import miniaudio

audio_path = "my_audio_file.mp3"
target_sampling_rate = 44100  # the input audio will be resampled at this sampling rate
n_channels = 1                # either 1 or 2
waveform_duration = 30        # in seconds
offset = 15                   # read only in the interval [15s, duration of file]

waveform_generator = miniaudio.stream_file(
    filename=audio_path,
    sample_rate=target_sampling_rate,
    seek_frame=int(offset * target_sampling_rate),
    frames_to_read=int(waveform_duration * target_sampling_rate),
    output_format=miniaudio.SampleFormat.FLOAT32,
    nchannels=n_channels)

for waveform in waveform_generator:
    ...  # do something with the waveform
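A small usage note (my addition, not part of the original answer): each chunk the generator yields is an array.array of float32 samples, so it converts cheaply to numpy if further processing is needed:

import numpy as np

for waveform in waveform_generator:
    samples = np.asarray(waveform, dtype=np.float32)  # interleaved samples
    print(samples.shape, samples.min(), samples.max())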
I know for sure that this works on mp3, ogg, wav, and flac, but for some reason it does not work on mp4/aac, and I am actually looking for a way to read mp4/aac.
I'm processing a video file and decided to split it into equal chunks for parallel processing, with each chunk running in its own process. This produces a series of video files that I want to join back together to reconstruct the original video.
I'm wondering what's the most efficient way of stringing these videos together without having to append frame by frame (and ideally deleting the chunk files after they are read, so I'm only left with one big video)?
I wanted a programmatic solution as opposed to a command-line one. I found moviepy very useful for concatenating videos (it's based on ffmpeg). natsort is very useful for ordering the files numerically.
import os
from moviepy.editor import VideoFileClip, concatenate_videoclips
from natsort import natsorted

# path is the path to a folder of videos
def concatVideos(path):
    currentVideo = None
    # list all files in the directory in natural numeric order
    for filePath in natsorted(os.listdir(path)):
        if filePath.endswith(".mov"):
            if currentVideo is None:
                currentVideo = VideoFileClip(path + filePath)
                continue
            video_2 = VideoFileClip(path + filePath)
            currentVideo = concatenate_videoclips([currentVideo, video_2])
    currentVideo.write_videofile("export.mp4")
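An alternative worth noting (not part of the original answer): since the chunks come from the same source, they share codec settings, so ffmpeg's concat demuxer can remux them without re-encoding frame by frame, which is much faster. A sketch, assuming ffmpeg is on the PATH; concat_with_ffmpeg and parts.txt are illustrative names:

import os
import subprocess
from natsort import natsorted

def concat_with_ffmpeg(path, output="export.mp4"):
    # write the file list in the format the concat demuxer expects
    parts = [f for f in natsorted(os.listdir(path)) if f.endswith(".mp4")]
    with open("parts.txt", "w") as f:
        for name in parts:
            f.write("file '%s'\n" % os.path.join(path, name))
    # -c copy remuxes the streams instead of re-encoding each frame
    subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0",
                    "-i", "parts.txt", "-c", "copy", output], check=True)
    # optionally delete the chunks once the combined file exists
    for name in parts:
        os.remove(os.path.join(path, name))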
I have a stream of PCM audio frames coming into my Python code. Is there a way to write the frames so that they are appended to an existing .wav file? What I have tried: I take two wav files, read the data from one, and write it to the other, existing wav file.
import numpy
import wave
import scipy.io.wavfile

with open('testing_data.wav', 'rb') as fd:
    contents = fd.read()

contents1 = bytearray(contents)
numpy_data = numpy.array(contents1, dtype=float)
scipy.io.wavfile.write("whatstheweatherlike.wav", 8000, numpy_data)
The data is getting appended to the existing wav file, but the wav file is corrupted when I try to play it in a media player.
With the wave library you can do that with something like this:
import wave

audiofile1 = "youraudiofile1.wav"
audiofile2 = "youraudiofile2.wav"
concatenated_file = "youraudiofile3.wav"

frames = []

wave0 = wave.open(audiofile1, 'rb')
frames.append([wave0.getparams(), wave0.readframes(wave0.getnframes())])
wave0.close()

wave1 = wave.open(audiofile2, 'rb')
frames.append([wave1.getparams(), wave1.readframes(wave1.getnframes())])
wave1.close()

result = wave.open(concatenated_file, 'wb')
result.setparams(frames[0][0])
result.writeframes(frames[0][1])
result.writeframes(frames[1][1])
result.close()
And the order of concatenation is exactly the order of the writes here:
result.writeframes(frames[0][1])  # audiofile1
result.writeframes(frames[1][1])  # audiofile2
Note that this only works cleanly when both files share the same parameters (channels, sample width, frame rate), since setparams is taken from the first file.
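A possible generalization (my sketch, not the original answer) that merges any number of files and fails loudly on mismatched parameters instead of producing a corrupt result:

import wave

def concat_wavs(paths, out_path):
    params = None
    with wave.open(out_path, 'wb') as out:
        for path in paths:
            with wave.open(path, 'rb') as w:
                if params is None:
                    params = w.getparams()
                    out.setparams(params)
                elif w.getparams()[:3] != params[:3]:
                    # nchannels, sampwidth and framerate must all match
                    raise ValueError(path + " has incompatible parameters")
                out.writeframes(w.readframes(w.getnframes()))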
I have a list of videos (all .mp4) that I would like to combine into one large .mp4 video. The names of the files are as follows: vid0.mp4, vid1.mp4, vid2.mp4, .... After searching, I found a Quora question which explains that the main file should be opened, then all sub-files should be read (as bytes) and written. So here is my code:
import os

with open("MainVideo.mp4", "wb") as f:
    for video in os.listdir("/home/timmy/sd/videos/"):
        temp = open('/home/timmy/sd/videos/%s' % video, 'rb')  # binary mode
        h = temp.read()
        '''
        for i in h:
            f.write(i)  # Error
        '''
        f.write(h)
        temp.close()
This only writes the first video. I suspect byte-level concatenation cannot work for .mp4 anyway, since each file carries its own container metadata, so players stop after the first file's stream. Is there a way to do it without using outside libraries? If not, please refer me to one.
I also tried the moviepy library, but I get an OSError.
code:
from moviepy.editor import VideoFileClip, concatenate_videoclips

li1 = []
for i in range(0, 30):
    name_of_file = "/home/timmy/sd/videos/vid%d.mp4" % i
    clip = VideoFileClip(name_of_file)
    # print(name_of_file)
    li1.append(clip)
I get an OSError after the 9th clip. (I think this is because of the size of the list.)
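A guess rather than a confirmed diagnosis: each VideoFileClip keeps an open ffmpeg reader process and file handle, so the OSError after the 9th clip is more likely an open-files limit than the size of the list. One workaround is to concatenate in small batches, closing each clip once its frames have been written; a sketch with illustrative names (concat_in_batches, part_*.mp4):

from moviepy.editor import VideoFileClip, concatenate_videoclips

def concat_in_batches(paths, output, batch_size=5):
    part_files = []
    for start in range(0, len(paths), batch_size):
        clips = [VideoFileClip(p) for p in paths[start:start + batch_size]]
        part = "part_%d.mp4" % (start // batch_size)
        concatenate_videoclips(clips).write_videofile(part)
        for clip in clips:
            clip.close()  # release the reader process and file handle
        part_files.append(part)
    # final pass over the (few) intermediate parts
    parts = [VideoFileClip(p) for p in part_files]
    concatenate_videoclips(parts).write_videofile(output)
    for clip in parts:
        clip.close()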