I'm processing a video file and decided to split it into equal chunks for parallel processing, with each chunk running in its own process. This produces a series of video files that I want to join back together to reconstruct the original video.
What's the most efficient way of stringing these videos together without having to append frame by frame (and ideally deleting the chunk files after they are read, so I'm only left with one big video)?
I wanted a programmatic solution as opposed to a shell command. I found moviepy very useful for concatenating videos (it's based on FFmpeg), and natsort is very useful for sorting the files into numerical order.
import os

from moviepy.editor import VideoFileClip, concatenate_videoclips
from natsort import natsorted

# path is the path to the folder of videos
def concatVideos(path):
    currentVideo = None
    # list the files in numerical order and concatenate the .mov clips one by one
    for filePath in natsorted(os.listdir(path)):
        if filePath.endswith(".mov"):
            if currentVideo is None:
                currentVideo = VideoFileClip(path + filePath)
                continue
            video_2 = VideoFileClip(path + filePath)
            currentVideo = concatenate_videoclips([currentVideo, video_2])
    currentVideo.write_videofile("export.mp4")
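The question also asks about deleting the chunks once they have been read. That isn't covered by the snippet above, but a possible variant (a sketch, not part of the original answer; the .mov filter and the "export.mp4" name are carried over as assumptions) is to concatenate all clips in a single call, close the readers, and then remove the source files:

import os

from moviepy.editor import VideoFileClip, concatenate_videoclips
from natsort import natsorted

def concat_and_cleanup(path, output="export.mp4"):
    # collect the chunk files in numerical order
    chunk_files = [f for f in natsorted(os.listdir(path)) if f.endswith(".mov")]
    clips = [VideoFileClip(path + f) for f in chunk_files]

    # concatenate once instead of pairwise inside a loop
    final = concatenate_videoclips(clips)
    final.write_videofile(output)

    # release the ffmpeg readers, then delete the source chunks
    for clip in clips:
        clip.close()
    for f in chunk_files:
        os.remove(path + f)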
Soooo, I have no clue how to do this and am completely new to Python; however, I would like to select all the images in a folder and turn each one into its own 10-second video using MoviePy.
Instead of combining all the images into one video, I want to make a separate video for each individual image in the folder. From what I know, you can use glob to get all the images in the folder, but that's about it.
I have tried:
from glob import glob

from moviepy.editor import ImageClip, AudioFileClip, afx, concatenate_videoclips

clips = [ImageClip(clip).set_duration(10) for clip in glob("imagesfolder/*.jpg")]
video_clip = concatenate_videoclips(clips, method="compose")
audioclip = AudioFileClip("backgroundmusicforduck.mp3")
new_audio = afx.audio_loop(audioclip, duration=video_clip.duration)
video_clip.audio = new_audio
video_clip.write_videofile("memes.mp4", fps=24, remove_temp=True, codec="libx264", audio_codec="aac")
But all this does is combine all the images into one video.
Thanks!!!
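What the question describes amounts to one write_videofile call per image rather than one concatenation. Here is a minimal sketch of that approach (the folder, music file, fps, and output naming are assumptions carried over from the attempt above):

import os
from glob import glob

from moviepy.editor import ImageClip, AudioFileClip, afx

audioclip = AudioFileClip("backgroundmusicforduck.mp3")

for image_path in glob("imagesfolder/*.jpg"):
    # one 10-second clip per image, with the background music looped to fit
    clip = ImageClip(image_path).set_duration(10)
    clip.audio = afx.audio_loop(audioclip, duration=clip.duration)

    # name each output video after its source image
    out_name = os.path.splitext(os.path.basename(image_path))[0] + ".mp4"
    clip.write_videofile(out_name, fps=24, codec="libx264", audio_codec="aac")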
I have a list of videos (all .mp4) that I would like to combine into one large .mp4 video. The files are named as follows: vid0.mp4, vid1.mp4, vid2.mp4, .... After searching, I found a Quora question which explains that the main file should be opened, then all the sub-files should be read (as bytes) and written into it. So here is my code:
import os

with open("MainVideo.mp4", "wb") as f:
    for video in os.listdir("/home/timmy/sd/videos/"):
        temp = open('/home/timmy/sd/videos/%s' % video, 'rb')
        h = temp.read()
        '''
        for i in h:
            f.write(i)  # Error
        '''
        f.write(h)
        temp.close()
This is only writing the first video. Is there a way to write it without using outside libraries? If not, please refer me to one.
I also tried the moviepy library, but I get an OSError.
code:
from moviepy.editor import VideoFileClip, concatenate_videoclips
li1 = []
for i in range(0, 30):
    name_of_file = "/home/timmy/sd/videos/vid%d.mp4" % i
    clip = VideoFileClip(name_of_file)
    # print(name_of_file)
    li1.append(clip)
I get an OSError after the 9th clip. (I think this is because of the size of the list.)
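Since the goal is to join the files without decoding them frame by frame, one common approach (not from the question's code; it assumes ffmpeg is on the PATH and that all clips share the same codec and parameters) is to let ffmpeg's concat demuxer stream-copy them into one file:

import os
import subprocess

video_dir = "/home/timmy/sd/videos/"

# write the list file in the format the concat demuxer expects
with open("filelist.txt", "w") as f:
    for i in range(0, 30):
        f.write("file '%s'\n" % os.path.join(video_dir, "vid%d.mp4" % i))

# stream-copy the clips into one output without re-encoding
subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0",
    "-i", "filelist.txt", "-c", "copy", "MainVideo.mp4",
], check=True)

This also avoids opening thirty VideoFileClip readers at once, which is a plausible cause of the OSError.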
I am trying to read several images from archive with skimage.io.imread_collection, but for some reason it throws an error:
"There is no item named '00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552/masks/*.png' in the archive".
I checked several times: the directory does exist in the archive, and the *.png just specifies that I want all the images in my collection. imread_collection works fine when I read the images from the extracted folder instead of from the archive.
import zipfile
import skimage.io

# specify folder name
each_img_idx = '00071198d059ba7f5914a526d124d28e6d010c92466da21d4a04cd5413362552'
with zipfile.ZipFile('stage1_train.zip') as archive:
    mask_ = skimage.io.imread_collection(archive.open(str(each_img_idx) + '/masks/*.png')).concatenate()
Can someone explain to me what's going on?
Not all scikit-image plugins support reading from bytes, so I recommend using imageio. You'll also have to tell ImageCollection how to access the images inside the archive, which is done using a customized load_func:
import zipfile

import imageio
from skimage import io

archive = zipfile.ZipFile('foo.zip')
images = [f.filename for f in archive.filelist]

# load an image by reading its bytes out of the archive
def zip_imread(fn):
    return imageio.imread(archive.read(fn))

ic = io.ImageCollection(images, load_func=zip_imread)
ImageCollection has some benefits like not loading all images into memory at the same time. But if you simply want a long list of NumPy arrays, you can do:
collection = [imageio.imread(archive.read(f)) for f in archive.filelist]
Is there a way to reliably patch clips together so that weird glitches are prevented? I put together an .mp4 from smaller .mp4 files and got a final video with weird glitches. I am running Python 3.6.1 on Windows 10 through Sublime Text 3. I used MoviePy to do the concatenation.
The code:
import os

from moviepy.editor import VideoFileClip, concatenate_videoclips

path = "C:/Users/blah/videos/out/"

cliparray = []
for filename in os.listdir(path):
    cliparray.append(VideoFileClip(path + filename))

final_clip = concatenate_videoclips(cliparray)
final_clip.write_videofile(path + "concatenatedvideo.mp4", codec="libx264")
The weird glitches:
One of the clips turns into a 3x3 grid of smaller clips.
Another has the audio not lined up with the video.
Another is sped up faster than what was normal.
I also had glitches while concatenating different video clips. Some of them had different resolutions, and that produced an output file with glitches of this sort. I fixed it with
final_clip = concatenate_videoclips(cliparray, method='compose')
The resulting output had no glitches, but since the clips have different resolutions, MoviePy uses the highest resolution among them as the output size. To fix this you might just crop or resize the clips to the same size; a sketch of that follows the example below.
from moviepy.editor import *

# load video 1 into a variable
video_1 = VideoFileClip('video1.mp4')
# load video 2 into a variable
video_2 = VideoFileClip('video2.mp4')

clips = [video_1, video_2]

# concatenate both clips
final = concatenate_videoclips(clips, method='compose')

# write the combined video to a file
final.write_videofile("merged.mp4")
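If you would rather keep a fixed output size than let the largest clip win, one option (a sketch, not part of the original answer; the 1280x720 target is just an assumed value) is to resize every clip to a common resolution before concatenating:

from moviepy.editor import VideoFileClip, concatenate_videoclips

# assumed common target resolution; pick whatever size your clips should share
target_size = (1280, 720)

clips = [VideoFileClip(name).resize(target_size)
         for name in ['video1.mp4', 'video2.mp4']]

final = concatenate_videoclips(clips, method='compose')
final.write_videofile("merged.mp4")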
I'd like to use pydub to take a long WAV file of individual words (with silence in between) as input, strip out all the silence, and output the remaining chunks as individual WAV files. The filenames can just be sequential numbers, like 001.wav, 002.wav, 003.wav, etc.
The "Yet another Example?" example on the Github page does something very similar, but rather than outputting separate files, it combines the silence-stripped segments back together into one file:
from functools import reduce

from pydub import AudioSegment
from pydub.utils import db_to_float
# Let's load up the audio we need...
podcast = AudioSegment.from_mp3("podcast.mp3")
intro = AudioSegment.from_wav("intro.wav")
outro = AudioSegment.from_wav("outro.wav")
# Let's consider anything that is 30 decibels quieter than
# the average volume of the podcast to be silence
average_loudness = podcast.rms
silence_threshold = average_loudness * db_to_float(-30)
# filter out the silence
podcast_parts = (ms for ms in podcast if ms.rms > silence_threshold)
# combine all the chunks back together
podcast = reduce(lambda a, b: a + b, podcast_parts)
# add on the bumpers
podcast = intro + podcast + outro
# save the result
podcast.export("podcast_processed.mp3", format="mp3")
Is it possible to output those podcast_parts fragments as individual WAV files? If so, how?
Thanks!
The example code is pretty simplified; you'll probably want to look at the strip_silence function:
https://github.com/jiaaro/pydub/blob/2644289067aa05dbb832974ac75cdc91c3ea6911/pydub/effects.py#L98
And then just export each chunk instead of combining them.
The main difference between the example and the strip_silence function is that the example looks at one-millisecond slices, which doesn't account for low-frequency sound very well, since one full cycle of a 40 Hz sound, for example, is 25 milliseconds long.
The answer to your original question, though, is that all those slices of the original audio segment are also audio segments, so you can just call the export method on them :)
update: you may want to take a look at the silence utilities I've just pushed up into the master branch; especially split_on_silence() which could do this (assuming the right specific arguments) like so:
from pydub import AudioSegment
from pydub.silence import split_on_silence
sound = AudioSegment.from_mp3("my_file.mp3")
chunks = split_on_silence(
    sound,
    # must be silent for at least half a second
    min_silence_len=500,
    # consider it silent if quieter than -16 dBFS
    silence_thresh=-16,
)
You could export all the individual chunks as WAV files like this:
for i, chunk in enumerate(chunks):
    chunk.export("/path/to/output/dir/chunk{0}.wav".format(i), format="wav")
which would output files named "chunk0.wav", "chunk1.wav", "chunk2.wav", and so on.