Hi, we are using Python to concatenate two videos, but the result does not play correctly on Android and iOS devices. The second video, which is appended at the end, breaks down into small chunks and becomes distorted. I have attached the output video as well; watch its last five seconds.
Output:
https://drive.google.com/file/d/1uwhUG03Q_92QmHiFd_VrecBwbWGTfwvP/view
We are using this:
# Import everything needed to edit video clips
from moviepy.editor import *
# loading video dsa gfg intro video
clip = VideoFileClip("v4.mp4")
# getting subclip as video is large
clip1 = clip.subclip(0, 5)
# loading video gfg
clipx = VideoFileClip("v3.mp4")
# getting subclip
clip2 = clipx.subclip(0, 3)
clip2.ipython_display(width = 480)
# clip list
clips = [clip1, clip2]
# concatenating both the clips
final = concatenate_videoclips(clips)
# final.write_videofile("new.mp4")
# showing final clip
final.ipython_display(width = 480)
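For reference, the write step we have commented out above would look roughly like this; a minimal sketch, where the codec choices are our assumption about what Android/iOS players generally accept (H.264 video plus AAC audio in an MP4 container):
# write the concatenated clip to disk, with the codecs spelled out explicitly
final.write_videofile("new.mp4", codec="libx264", audio_codec="aac")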
I want to compress a GIF image by extracting 15 frames from the GIF that preferably should be distinct.
I'm using Python and the Pillow library, and I couldn't find any way in the Pillow docs to get the number of frames a GIF has, nor how to extract a specific frame from a GIF, because Pillow restricts that.
Is there any way to extract frames without iterating through each frame consequently?
Is there a more advanced Python library for GIF processing?
Here is an extension of @radarhere's answer that divides the .gif into num_key_frames different parts and saves each part to a new image.
from PIL import Image

num_key_frames = 8
with Image.open('somegif.gif') as im:
    for i in range(num_key_frames):
        im.seek(im.n_frames // num_key_frames * i)
        im.save('{}.png'.format(i))
The result is somegif.gif broken into 8 pieces saved as 0..7.png.
For the number of frames, you are looking for n_frames. Take a look here.
from PIL import Image
im = Image.open('test.gif')
print("Number of frames: "+str(im.n_frames))
For extracting a single frame:
im.seek(20)
im.convert('RGB').save('frame20.jpg')  # JPEG cannot store palette (P) mode, so convert first
A solution that reliably extracts proper frames from any GIF file (including GIFs that only store partial frames):
BigglesZX/gifextract.py
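In case the link goes away, here is a rough sketch (not that script itself) of the idea it implements: many GIFs only store the pixels that changed since the previous frame, so each frame has to be composited onto a running canvas before it is saved. This assumes Pillow and a hypothetical file name:
from PIL import Image

def extract_full_frames(path):
    # Composite each frame onto a running canvas so GIFs that only store
    # partial (delta) frames still come out as complete images.
    frames = []
    with Image.open(path) as im:
        canvas = im.convert("RGBA")
        frames.append(canvas.copy())
        for i in range(1, im.n_frames):
            im.seek(i)
            canvas = Image.alpha_composite(canvas, im.convert("RGBA"))
            frames.append(canvas.copy())
    return frames

# hypothetical usage
for i, frame in enumerate(extract_full_frames("somegif.gif")):
    frame.save("full_frame_{}.png".format(i))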
If you have TensorFlow imported, you can:
import numpy as np
import tensorflow as tf

def load_gif(file_path):
    # decode_gif returns every frame of the GIF as a single uint8 tensor
    with tf.io.gfile.GFile(file_path, 'rb') as f:
        video = tf.io.decode_gif(f.read())
    return np.array(video)
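A quick usage sketch (the file name is hypothetical); tf.io.decode_gif returns all frames as a single 4-D uint8 tensor of shape (num_frames, height, width, 3):
frames = load_gif("somegif.gif")
print(frames.shape)  # (num_frames, height, width, 3)
print(frames.dtype)  # uint8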
I have been using MoviePy to combine several shorter video files into hour-long files. Some of the small files are "broken": they contain video but were not finalized correctly (i.e. they play in VLC, but there is no duration and you cannot skip around in the video).
I noticed this issue when trying to create a clip using the VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code, the problem seems to occur when ffmpeg_reader.py checks the duration of the file, where it looks for the duration parameter in the video file. Since the file never finished recording properly, this information is missing. I'm not very familiar with how video files are structured, so I am unsure how to proceed from here.
You're correct. This issue arises commonly when the video duration info is missing from the file.
Here's a thread on the issue: GitHub moviepy issue 116
One user proposed the solution of using MP4Box to convert the video using this guide: RASPIVID tutorial
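If you would rather script that conversion than run it by hand, something along these lines should work; a sketch that assumes MP4Box (part of GPAC) is installed and on the PATH, with hypothetical file names:
import subprocess

# Re-mux the broken recording into a fresh MP4 container so the duration
# metadata gets written; MP4Box must be installed separately (it ships with GPAC).
subprocess.run(["MP4Box", "-add", "broken_clip.mp4", "fixed_clip.mp4"], check=True)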
The final solution that worked for me involved specifying the path to ImageMagick's binary, as WDBell mentioned in this post.
I had the path correctly set in my environment variables, but it wasn't until I explicitly defined it in config_defaults.py that it started working:
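The change in config_defaults.py looked roughly like this (the install path below is just an example; point it at wherever the ImageMagick binary lives on your system):
# in moviepy/config_defaults.py, replace the "auto-detect" default
IMAGEMAGICK_BINARY = r"C:\Program Files\ImageMagick-7.0.8-Q16\magick.exe"  # example path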
I solved it in a simpler way: with the help of VLC I converted the file to the format "MPEG4 xxx TV/device",
and you can then use your new file with Python without any problem.
xxx = 720p or
xxx = 1080p
Everything depends on your choice of output format.
I already answered this question in this GitHub issue: https://github.com/Zulko/moviepy/issues/116
This issue appears when the VideoFileClip(file) function from moviepy looks for the duration parameter in the video file and it is missing. To avoid the error (in the case of those corrupted files), you should make sure that the total-frames count is not null before calling the function: clip = mp.VideoFileClip("/home/test/"+file)
So, I handled it in a simpler way using cv2.
The idea:
1. Find out the total number of frames.
2. If the frame count is null, call the cv2 writer and generate a temporary copy of the video clip.
3. Mix the audio from the original video into the copy.
4. Replace the original video with the copy and delete the copy.
5. Then call clip = mp.VideoFileClip("/home/test/"+file).
Clarification: since OpenCV's VideoWriter does not encode audio, the new copy will not contain audio. It is therefore necessary to extract the audio from the original video and mix it into the copy before replacing the original video with it.
You must import cv2
import cv2
And then add something like this in your code before the evaluation:
cap = cv2.VideoCapture("/home/test/"+file)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
print(f'Checking video {file}: frames {frames}, fps: {fps}')
For a corrupted file this will most likely report 0 frames, but it should still return at least the frame rate (fps).
Now we can add a check that avoids the error and handles it by writing a temporary video:
if frames == 0:
    print(f'No frames data in video {file}, trying to convert this video..')
    writer = cv2.VideoWriter("/home/test/fixVideo.avi",
                             cv2.VideoWriter_fourcc(*'DIVX'),
                             int(cap.get(cv2.CAP_PROP_FPS)),
                             (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                              int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    while True:
        ret, frame = cap.read()
        if ret is True:
            writer.write(frame)
        else:
            cap.release()
            print("Stopping video writer")
            writer.release()
            writer = None
            break
Mix the audio from the original video with the copy. I have created a function for this:
from moviepy.editor import VideoFileClip, CompositeAudioClip

def mix_audio_to_video(pathVideoInput, pathVideoNonAudio, pathVideoOutput):
    videoclip = VideoFileClip(pathVideoInput)
    audioclip = videoclip.audio
    new_audioclip = CompositeAudioClip([audioclip])
    videoclipNew = VideoFileClip(pathVideoNonAudio)
    videoclipNew.audio = new_audioclip
    videoclipNew.write_videofile(pathVideoOutput)

mix_audio_to_video("/home/test/"+file, "/home/test/fixVideo.avi", "/home/test/fixVideo.mp4")
Replace the original video and delete the copies:
import os

os.replace("/home/test/fixVideo.mp4", "/home/test/"+file)
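Finally, following the step list above, remove the temporary copy and re-open the repaired file with moviepy (a short sketch, assuming moviepy was imported as mp as in the question):
os.remove("/home/test/fixVideo.avi")  # drop the silent temporary copy
clip = mp.VideoFileClip("/home/test/"+file)  # the duration metadata is now present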
I had the same problem and I found a solution.
I don't know why, but if we pass the path as a raw string, path = r'<path>', instead of an escaped string like ("F:\\path"), we get no error.
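For illustration, this is the kind of call I mean (the file name is made up):
from moviepy.editor import VideoFileClip

# a raw string takes backslashes literally, so Windows paths need no escaping
path = r'F:\videos\example.mp4'  # hypothetical file
clip = VideoFileClip(path)
print(clip.duration)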
Just open
C:\Users\gladi\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py
delete the existing code and replace it with the version I provide on GitHub: https://github.com/dudegladiator/Edited-ffmpeg-for-moviepy
clip1 = VideoFileClip('path')
c = clip1.duration
print(c)
I found a way to extract a frame from a video using cv2.
But what I also want to achieve is this: press pause while the video is playing, so that the frame is captured and the current time of the video is taken, and I can make a comment or whatever.
My code for extracting the frame:
import cv2

vidcap = cv2.VideoCapture('d:/final.mp4')
vidcap.set(cv2.CAP_PROP_POS_MSEC, 25505)  # cue to the 25.5 second position
success, image = vidcap.read()
if success:
    cv2.imwrite("frame25sec.jpg", image)  # save frame as JPEG file
    cv2.imshow("25sec", image)
    cv2.waitKey()
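Conceptually, this is the kind of loop I imagine (an untested sketch using only cv2; I am not sure it is the right approach):
import cv2

# play the video; SPACE "pauses" and saves the current frame plus its
# position in milliseconds, any key resumes, 'q' quits
cap = cv2.VideoCapture('d:/final.mp4')
while True:
    success, frame = cap.read()
    if not success:
        break
    cv2.imshow("player", frame)
    key = cv2.waitKey(30) & 0xFF
    if key == ord(' '):
        msec = cap.get(cv2.CAP_PROP_POS_MSEC)  # current playback time
        cv2.imwrite("paused_at_%dms.jpg" % int(msec), frame)
        cv2.waitKey(0)  # wait here until any key is pressed
    elif key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()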
My own research (8 hours) failed, so please point me to a proper method.
I am using Python 3.4.
I am using the Python Audiotools library to access the raw data of a song. When I convert the .flac to .wv, then call to_pcm() and pcm.read(), it shows me only the first 88200 frames of the song instead of all 13397580 frames. The frames it does show are correct; I cross-checked against Audacity. Could anyone help me understand why this is happening? I am sampling at 44.1 kHz, so 88200 frames means it shows me exactly the first 2 seconds.
Here is my code:
import os
from audiotools import *

files = os.listdir('./')
stream = open(files[3])
wave = stream.convert("sample.wv", WavPackAudio)
pcm_wave = wave.to_pcm()
frames = pcm_wave.read()
print len(frames)
for frame in frames:
    print frame,
    print "\t",