In Python, I'm loading a GIF with PIL. I extract the first frame, modify it, and put it back. I save the modified GIF with the following code:
imgs[0].save('C:\\etc\\test.gif',
             save_all=True,
             append_images=imgs[1:],
             duration=10,
             loop=0)
Where imgs is an array of the images that make up the GIF, and duration is the delay between frames in milliseconds. I'd like to make the duration value the same as in the original GIF, but I'm unsure how to extract either the total duration of a GIF or the frames displayed per second.
As far as I'm aware, the GIF header does not provide any fps information.
Does anyone know how I could get the correct value for duration?
Thanks in advance
Edit: Example of GIF as requested:
Retrieved from here.
In GIF files, each frame has its own duration. So there is no general fps for a GIF file. The way PIL supports this is by providing an info dict that gives the duration of the current frame. You could use seek and tell to iterate through the frames and calculate the total duration.
Here is an example program that calculates the average frames per second for a GIF file.
import os
from PIL import Image

FILENAME = os.path.join(os.path.dirname(__file__),
                        'Rotating_earth_(large).gif')

def get_avg_fps(PIL_Image_object):
    """ Returns the average framerate of a PIL Image object """
    PIL_Image_object.seek(0)
    frames = duration = 0
    while True:
        try:
            frames += 1
            duration += PIL_Image_object.info['duration']
            PIL_Image_object.seek(PIL_Image_object.tell() + 1)
        except EOFError:
            return frames / duration * 1000

def main():
    img_obj = Image.open(FILENAME)
    print(f"Average fps: {get_avg_fps(img_obj)}")

if __name__ == '__main__':
    main()
If you assume that the duration is equal for all frames, you can just do:
print(1000 / Image.open(FILENAME).info['duration'])
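Since the goal is to re-save the modified GIF with the original timing, it may help to know that save() also accepts duration as a list of per-frame durations in milliseconds, so you can collect each frame's duration while extracting the frames and pass the list straight back. A minimal, self-contained sketch (the two-frame sample GIF and the filenames are placeholders standing in for your actual file):

```python
from PIL import Image, ImageSequence

# stand-in for the original GIF: two frames at 100 ms and 200 ms
frames = [Image.new('P', (8, 8), c) for c in (0, 1)]
frames[0].save('test.gif', save_all=True, append_images=frames[1:],
               duration=[100, 200], loop=0)

# collect each frame's duration (ms) while copying the frames
src = Image.open('test.gif')
imgs, durations = [], []
for frame in ImageSequence.Iterator(src):
    imgs.append(frame.copy())
    durations.append(frame.info.get('duration', 100))
print(durations)  # [100, 200]

# ... modify imgs[0] here ...

# duration accepts a list with one value per frame, preserving the timing
imgs[0].save('out.gif', save_all=True, append_images=imgs[1:],
             duration=durations, loop=0)
```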
I am trying to create videos with different frame rates (5, 10, 20, 30, 40) from a 50-frame video. I want the first and the last frames to be fixed (first: the 0.00 sec frame, last: the 0.98 sec frame). The other frames will be selected randomly from the 50 frames of the video.
I found the code on the internet, so I don't completely understand it. For this reason, the changes I made do not work.
Actually, the code below runs (at least it does not raise any error), but when I try to play the output, I can't, because the video player says the video is broken. At first I tried to cut the first 5, 10, 20, 30, or 40 frames of the video, and that did work. However, now I have to put frames randomly between the beginning and the end.
I also have to insert the frame rates and the video name manually, since I could not do it in a loop (I have 6 different videos and 5 different fps rates to cut).
I hope I explained it clearly, and I hope someone can help me, because I've been trying to solve this for a looong time. :(
Thanks in advance!!
import cv2
import numpy as np
import os
import random
from os.path import isfile, join

# Cutting frames
vidcap = cv2.VideoCapture('/work/rubbing_dooh_50fps.mp4')

def getFrame(sec):
    vidcap.set(cv2.CAP_PROP_POS_MSEC, sec * 1000)
    hasFrames, image = vidcap.read()
    if hasFrames:
        cv2.imwrite("framecatch/rubbing_dooh_frame " + str(sec) + " sec.jpg", image)  # save frame as JPG file
    return hasFrames

sec = 0
frameRate = 0.02
success = getFrame(sec)
while success:
    sec = sec + frameRate
    sec = round(sec, 2)
    success = getFrame(sec)
# Creating a video from the frames
def convert_frames_to_video(pathIn, pathOut, fps):
    frame_array = []
    files = [f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]
    for i in random.sample(range(1, 48), 3):
        filename = pathIn + files[i]
        # reading each file
        img = cv2.imread(filename)
        beg = pathIn + files[47]  # corresponding first frame
        end = pathIn + files[22]  # corresponding last frame
        imgbeg = cv2.imread(beg)
        imgend = cv2.imread(end)
        height, width, layers = imgbeg.shape
        size = [height, width]
        print(filename)
        # inserting the frames into an image array
        frame_array.append(img)
        frame_array.append(imgbeg)  # probably i shouldn't use append here
        frame_array.append(imgend)
    out = cv2.VideoWriter(pathOut, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
    for i in range(len(frame_array)):
        # writing to an image array
        a = out.write(frame_array[i])
    out.release()

def main():
    pathIn = '/work/framecatch/'
    pathOut = 'rubbing_dooh_5fps.mp4'
    fps = 5.0
    convert_frames_to_video(pathIn, pathOut, fps)

if __name__ == "__main__":
    main()
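One likely reason the output won't play: cv2.VideoWriter expects its frame size as a (width, height) tuple, while the code above passes [height, width]. Independently of OpenCV, the frame-selection logic (fixed first and last frame, random frames in between, in temporal order) can be sketched as a small helper; build_frame_order and all its parameter values below are hypothetical, chosen just to illustrate the idea:

```python
import random

def build_frame_order(n_frames, n_middle, seed=None):
    """Return frame indices: the fixed first frame, randomly chosen
    middle frames sorted into temporal order, then the fixed last frame."""
    rng = random.Random(seed)
    # choose middle frames strictly between the first and last frame
    middle = sorted(rng.sample(range(1, n_frames - 1), n_middle))
    return [0] + middle + [n_frames - 1]

# e.g. a 50-frame video reduced to 5 frames total
order = build_frame_order(n_frames=50, n_middle=3, seed=42)
print(order)  # first entry is always 0, last is always 49
```

The returned indices can then be used to pick files from the extracted frames (after sorting them numerically) and feed them to the writer in order.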
I have an hour-long video that I would like to save a clip between two timestamps-- say, 11:20-11:35. Is the best way to do this frame-by-frame, or is there a better way?
Here's the gist of what I did frame-by-frame. If there's a less lossy way to do it, I'd love to know! I know I could do it from the terminal using ffmpeg, but I am curious for how to best do it using cv2.
def get_clip(input_filename, output_filename, start_sec, end_sec):
    # input and output videos are probably mp4
    vidcap = cv2.VideoCapture(input_filename)
    # math to find starting and ending frame number
    fps = find_frames_per_second(vidcap)
    start_frame = int(start_sec * fps)
    end_frame = int(end_sec * fps)
    vidcap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    # open video writer
    vidwrite = cv2.VideoWriter(output_filename, cv2.VideoWriter_fourcc(*'MP4V'), fps, get_frame_size(vidcap))
    success, image = vidcap.read()
    frame_count = start_frame
    while success and (frame_count < end_frame):
        vidwrite.write(image)  # write frame into video
        success, image = vidcap.read()  # read frame from video
        frame_count += 1
    vidwrite.release()
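For the find_frames_per_second helper above, cv2 exposes the rate directly via vidcap.get(cv2.CAP_PROP_FPS). The "11:20-11:35" timestamps also need converting to the start_sec/end_sec arguments; a small helper for that, assuming mm:ss or hh:mm:ss strings (the function name is mine, not from the original code):

```python
def timestamp_to_seconds(ts):
    """Convert a 'mm:ss' or 'hh:mm:ss' timestamp string to total seconds."""
    seconds = 0
    for part in ts.split(':'):
        # each colon-separated field shifts the running total by a factor of 60
        seconds = seconds * 60 + int(part)
    return seconds

print(timestamp_to_seconds("11:20"))    # 680
print(timestamp_to_seconds("1:11:35"))  # 4295
```

So the clip in the question would be get_clip(name_in, name_out, timestamp_to_seconds("11:20"), timestamp_to_seconds("11:35")).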
I'm trying to create a data set from an AVI file I have, and I know I've made a mistake somewhere.
The AVI file is 1,827 KB (4:17), but after running my code to convert the frames into arrays of numbers, I now have a file that is 1,850,401 KB. This seems a little large to me.
How can I reduce the size of my data set / where did I go wrong?
# Program To Read video
# and Extract Frames
import cv2
import numpy as np
import time

# Function to extract frames
def FrameCapture(path):
    # Path to video file
    vidObj = cv2.VideoCapture(path)
    # Used as counter variable
    count = 0
    # checks whether frames were extracted
    success = 1
    newDataSet = []
    try:
        while success:
            # vidObj object calls read
            # function extract frames
            success, image = vidObj.read()
            img_reverted = cv2.bitwise_not(image)
            new_img = img_reverted / 255.0
            newDataSet.append(new_img)
            #new_img >> "frame%d.txt" % count
            # Saves the frames with frame-count
            #cv2.imwrite("frame%d.jpg" % count, image)
            count += 1
    except:
        timestr = time.strftime("%Y%m%d-%H%M%S")
        np.save("DataSet" + timestr, newDataSet)

# Driver Code
if __name__ == '__main__':
    # Calling the function
    FrameCapture("20191212-150041output.avi")
I'm going to guess that the video mainly consists of similar pixels blocked together, which the codec has compressed down to such a low file size. When you load single images into arrays, all that compression goes away, and depending on the fps of the video you will have thousands of uncompressed images. When you first load an image, it will be stored as a NumPy array of dtype uint8, and the image size will be WIDTH * HEIGHT * N_COLOR_CHANNELS bytes. After you divide it by 255.0 to normalize between 0 and 1, the dtype changes to float64 and the image size increases eightfold. You can use this information to calculate the expected size of the images.
So your options are to decrease the height and width of your images (downscale), change to grayscale, or, if your application allows it, stick with uint8 values. If the images don't change too much and you don't need thousands of them, you could also save only every 10th frame, or whatever seems reasonable. If you need them all as-is but they don't fit in memory, consider using a generator to load them on demand. It will be slower, but at least it will run.
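The eightfold increase is easy to verify with NumPy: uint8 uses one byte per value, and dividing by 255.0 promotes the array to float64 at eight bytes per value. A quick check on a dummy 640x480 RGB frame (the dimensions here are just an example):

```python
import numpy as np

# a dummy 640x480 RGB frame, as OpenCV would load it (dtype uint8)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(frame.nbytes)         # 921600 bytes = 480 * 640 * 3

normalized = frame / 255.0  # promotes to float64
print(normalized.dtype)     # float64
print(normalized.nbytes)    # 7372800 bytes, eight times larger

# float32 halves the precision cost; staying in uint8 is smaller still
smaller = (frame / 255.0).astype(np.float32)
print(smaller.nbytes)       # 3686400 bytes
```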
I have two videos. One is at 10 fps, and the other is the same video, also at 10 fps, but interpolated from the same video's 5 fps version. I want to see how accurate the frame interpolation is by comparing the RGB values of every frame. I can extract every frame from both videos; however, I can only get the RGB values of one frame at a time. I use the following code:
from PIL import Image

im = Image.open('frame1.jpg')
pix = im.load()
for x in range(0, 640):
    for y in range(0, 480):
        print(pix[x, y])
This code can only find the RGB values in one frame, and I have hundreds of frames. The frames of my original video are named frame1.jpg, frame2.jpg, ... frame100.jpg, etc., and the other video's frames are saved as frames1.jpg, frames2.jpg, ... frames100.jpg, etc. Is there a way to automate this?
You can use glob (comes natively with Python) to load all the images and store them simultaneously, or process them one at a time.
import glob
from PIL import Image

# path is the directory containing the frames
for frames in glob.glob(path + "*.jpg"):
    im = Image.open(frames)
    pix = im.load()
    for x in range(0, 640):
        for y in range(0, 480):
            print(pix[x, y])
This will do what you did, but it will loop over all files in path with JPG format. If you want something more specific, you can add requests and I'll add to my answer. But this will sequentially load all images, allowing you to process them subsequently.
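One caveat: glob.glob returns files in arbitrary order, and even a lexicographic sort puts frame10.jpg before frame2.jpg. To pair corresponding frames from the two videos for comparison, it's safer to sort both lists by the numeric index in the filename. A sketch of that pairing, assuming the naming scheme from the question (the helper name is mine):

```python
import re

def numeric_key(filename):
    """Sort key: the integer embedded in a frame filename, e.g. 'frame12.jpg' -> 12."""
    match = re.search(r'(\d+)\.jpg$', filename)
    return int(match.group(1)) if match else -1

original = sorted(["frame10.jpg", "frame2.jpg", "frame1.jpg"], key=numeric_key)
interpolated = sorted(["frames10.jpg", "frames2.jpg", "frames1.jpg"], key=numeric_key)
print(original)  # ['frame1.jpg', 'frame2.jpg', 'frame10.jpg']

# corresponding frames now line up index by index
for orig_name, interp_name in zip(original, interpolated):
    print(orig_name, "<->", interp_name)
```

With the lists aligned, each pair can be opened with PIL and compared pixel by pixel as in the code above.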
Put all that stuff in a loop.
for i in range(1, 101):
    file = 'frame%d.jpg' % i
    im = Image.open(file)
    # ...
import cv

# create a window
winname = "myWindow"
win = cv.NamedWindow(winname, cv.CV_WINDOW_AUTOSIZE)

# load video file
invideo = cv.CaptureFromFile("video.avi")

# interval between frames in ms
fps = cv.GetCaptureProperty(invideo, cv.CV_CAP_PROP_FPS)
interval = int(1000.0 / fps)

# play video
while True:
    im = cv.QueryFrame(invideo)
    cv.ShowImage(winname, im)
    if cv.WaitKey(interval) == 27:  # ASCII 27 is the ESC key
        break

del invideo
cv.DestroyWindow(winname)
Above is a simple Python program using the OpenCV library to play a video file.
The only part I don't understand is im = cv.QueryFrame(invideo).
According to the OpenCV API, "QueryFrame grabs a frame from a camera or video file, decompresses it and returns it."
To my understanding, it just returns an image in IplImage format for one single frame, but how does it know which frame to return? The only parameter QueryFrame needs is the video capture, but there is no index to tell it which frame among the video's frames I need to retrieve. What if I need to play a video starting from the middle?
You have to use cv.GetCaptureProperty with CV_CAP_PROP_FRAME_COUNT to get the number of frames of your video.
Divide it by 2 to find the middle.
Use QueryFrame until you reach this value.