I am using the following code to animate GIFs in tkinter.
class gif:
    def __init__(self, filename):
        fIndex = 0
        global frames
        frames = []
        global lastFrame
        while True:  # infinite loop
            try:  # executes the following code if possible
                part = 'gif -index {}'.format(fIndex)  # takes an index of a frame of the gif
                frame = PhotoImage(file=filename, format=part)
                print(str(fIndex))
                # converts each frame to a PhotoImage
            except:  # once the last frame is reached, this is executed
                lastFrame = fIndex - 1  # last frame index saved
                break  # while loop ends
            frames.append(frame)  # PhotoImage of frame added to list
            fIndex += 1  # index incremented

    def animate(self, fNumber):  # function to animate the gif
        if fNumber > lastFrame:  # if the last frame is reached,
            fNumber = 0  # index changed to 0 so it plays again
        canvas.create_image(700, 350, image=frames[fNumber], anchor=CENTER)
        master.after(20, self.animate, fNumber + 1)  # each frame played with
        # 20 milliseconds in between

matr1x = gif('spinningmatrix.gif')
matr1x.animate(0)
This works for some of my GIFs but not all of them. For the ones that don't work, when I print fIndex it only displays 0 and 1, and only the first frame is displayed on the GUI. The ones that do work have more indexes displayed. I'm just wondering why some don't work and why it only gives me 0 and 1 as the indexes. Any help would be appreciated.
Edit: I split the GIFs into frames online, and for two of the GIFs that work properly, my program gives the correct number of frames. For one that works but is a bit off, the program says 3 frames while the online tool says 20. For the other two that don't work at all, the frame counts differ greatly from the online tool.
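One small diagnostic sketch (not part of the original program, just the same 'gif -index N' trick in isolation) that could show why the loop stops so early is to catch tkinter.TclError and print it instead of breaking silently:
import tkinter as tk

root = tk.Tk()  # a Tk instance must exist before creating PhotoImages

def count_gif_frames(filename):
    """Try successive frame indexes and report the error that ends the loop."""
    index = 0
    while True:
        try:
            tk.PhotoImage(file=filename, format='gif -index {}'.format(index))
        except tk.TclError as err:
            print('stopped at index {}: {}'.format(index, err))
            return index
        index += 1

print(count_gif_frames('spinningmatrix.gif'))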
I need to get some groups of consecutive frames from one video. The groups are all formed by the same amount of frames, and they are consecutive, so something like:
[10, 11, 12, 13, 14, 15],
[32, 33, 34, 35, 36, 37],
[123, 124, 125, 126, 127, 128]
This is just an example; in my case I don't know the ranges of indices in advance, they are provided during code execution!
I tried to use this code (e.g., as suggested here) every time I have to extract the frames in a given range:
video_capture = cv2.VideoCapture(video_path)
for frame_id in frames_range:
    video_capture.set(cv2.CAP_PROP_POS_FRAMES, frame_ind)
    success, image = video_capture.read()
    if not success:
        logging.warning(f"Error reading frame {frame_id}", frame_ind)
    # do something with frames
But this is not working: I'm generally always getting the first N frames of the video (N is the length of the ranges, 6 in my example).
I read that video_capture.set could fail in some cases due to video compression and, as a matter of fact, I am getting some errors when reading certain video frames with this approach.
Another simple approach is to go through the video frame by frame every time and select the ones I need:
def get_required_frames(frames_range: list, video_cap) -> list:
    frames = []
    frame_counter = 0
    while True:
        ret, frame = video_cap.read()
        if not ret:
            return []
        # collect frames in the selected range
        if frame_counter in frames_range:
            frames.append(frame)
        # after the selected range, stop the loop
        if frame_counter > frames_range[-1]:
            break
        frame_counter += 1
    return frames

# every time I get the required range, I call this function
frames = get_required_frames(frames_range, cv2.VideoCapture(video_path))
# do something with frames
This second method is working ok, but it's kinda slow, since I have to go through the whole video from frame 0 until I reach the current frame range every time (and some of the videos can be quite long).
Considering the frame ranges are always consecutive, is there any way to leave the video "hanging", so that at least I can start again from the last position used in the previous range of frames?
I'm not sure if this is a typo created while simplifying the code for this post, but this seems wrong:
for frame_id in frames_range:
    video_capture.set(cv2.CAP_PROP_POS_FRAMES, frame_ind)
You are getting 'frame_id' from the 'frames_range' but then setting the index of video_capture to 'frame_ind'.
If that's not the issue, then yes, you're right: setting the frame index via 'CAP_PROP_POS_FRAMES' is not necessarily reliable. In answer to your question of whether you can leave the video_capture 'hanging', the answer is yes. You should be able to do the following:
cap = cv2.VideoCapture(video_path)
frames = get_required_frames(frames_range, cap)
When you read from 'cap' again, it should be where you left it.
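For example, a minimal sketch of that idea (assuming the ranges arrive in increasing order; get_next_range, ranges and the running position counter are just illustrative names, not part of your original code) could keep one capture open and a frame counter across calls:
import cv2

def get_next_range(cap, frames_range, current_pos):
    """Collect the frames in frames_range from an already-open capture.

    current_pos is the index of the next frame cap.read() will return;
    returns the collected frames and the updated position.
    """
    frames = []
    while current_pos <= frames_range[-1]:
        ret, frame = cap.read()
        if not ret:
            break
        if current_pos in frames_range:
            frames.append(frame)
        current_pos += 1
    return frames, current_pos

cap = cv2.VideoCapture(video_path)  # video_path as in the question
pos = 0
for frames_range in ranges:  # each new range arrives during execution
    frames, pos = get_next_range(cap, frames_range, pos)
    # do something with frames
cap.release()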
I'm trying to develop some simple code using OpenCV and Python. My idea is the following:
I have a video with a moving object (free falling or parabolic) and I've managed to separate the video into frames. What I need (and I'm a total newbie in this and have little time) is to extract the coordinates of the object frame by frame. The idea is to click on the moving object, get the (x, y) coordinate and the frame number, and then open the next frame so I can do the same. So basically something with clicks over the photo, extracting the data into a CSV, and showing the next frame. That way I could study its movement through space, with its velocity and accelerations and so on.
Haven't written any code yet.
Thanks in advance.
Look at the docs example of using mouse input in OpenCV with Python:
mouse example - opencv
You can define a callback that reads the click coordinates:
def get_clic_point(event, x, y, flags, param):
    if event == cv.EVENT_LBUTTONDBLCLK:  # on left double click
        print(x, y)  # here goes your specific code, using x, y as the click coordinates
In the main loop, you need to create a window and supply it with the callback method:
cv.namedWindow('main_win')
cv.setMouseCallback('main_win',get_clic_point)
Then, using the window name (in this case 'main_win') as your window handle, you can show an image by calling cv.imshow('main_win', img), where img is a loaded image.
You can write a simple loop like this:
cv.namedWindow('main_win')
cv.setMouseCallback('main_win', get_clic_point)

images = []
# read images into this list
# ...

i = 0
while True:
    cv.imshow('main_win', images[i])
    k = cv.waitKey(1) & 0xFF
    if k == ord('n'):  # n for next
        # change image
        i = i + 1
        i = i % len(images)
    elif k == 27:  # Esc to quit
        break

cv.destroyAllWindows()
Then just substitute the callback with your desired logic.
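For the CSV part of the question, one possible substitution (a sketch only; the clicks list, the shared index i, and the file name clicks.csv are illustrative, not fixed API names) could record the frame index with every double click and dump everything at the end:
import csv
import cv2 as cv

clicks = []  # collected (frame_index, x, y) tuples
i = 0        # current image index, updated by the main loop above

def get_clic_point(event, x, y, flags, param):
    # record the click together with the index of the image being shown
    if event == cv.EVENT_LBUTTONDBLCLK:
        clicks.append((i, x, y))

# ... run the main loop shown above, registering this callback ...

# after the loop, write the collected points to a CSV file
with open('clicks.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['frame', 'x', 'y'])
    writer.writerows(clicks)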
Currently I am using 2017_08_31_0121.mp4 as my video, which is 21 seconds long, and once I break it into frames, I get 504 frames. This means the frames per second is set to 24. I want to change the number of frames, but I do not know which part of the following code is responsible for setting the frames per second.
Questions:
I thought for a long time that the default FPS is 25, but now I have 24. Can you please let me know where the default FPS is set and what it is?
If I want to use a custom FPS, let's say 10, how can I modify the following code to do it?
import cv2

vidcap = cv2.VideoCapture('/content/2017_08_31_0121.mp4')
success, image = vidcap.read()
count = 0
while success:
    if count < 10:
        id = f'00{count}'
    elif count < 100:
        id = f'0{count}'
    else:
        id = count
    cv2.imwrite(f"./new_frames/frame{id}.jpg", image)  # save frame as JPEG file
    success, image = vidcap.read()
    count += 1
You need to know how "video" works.
Video consists of keyframes and P/B-frames. Keyframes are complete images on their own. To decode a P/B-frame, preceding frames need to be decoded first. Some video files consist only of keyframes ("intra"). Some video files consist of a keyframe every ~0.1-10 seconds, and only P/B-frames in between.
You can skip around in a video, but you can only directly skip to keyframes. If you wanted to skip to a non-keyframe, you'd have to first skip to the preceding keyframe, and then decode each following frame, until you're at the destination.
Ideas I would recommend that you not follow:
ffmpeg -i INPUT -r 3 OUTPUT would read the entire video, and duplicate/drop frames as needed to achieve 3 frames per second, while maintaining the "speed" of what you see in the video. It would also have to re-encode the result. That's only a sensible option if you need to read the same video repeatedly in that frame rate.
Involving GNU Parallel with ffmpeg would be pointless because ffmpeg itself runs its decoding and encoding in parallel (for most codecs), using all available CPU.
Here is what you can do:
Use the grab and retrieve methods of VideoCapture. Call grab repeatedly; this does the minimal work to decode a frame and advance in the video. Call retrieve only for the frames you actually want; this does the rest of the work and gives you the frame as an array.
You would have to check the video's fps value using vidcap.get(cv.CAP_PROP_FPS) and then count along and decide if you only need to grab, or both grab and retrieve.
import numpy as np
import cv2 as cv

vidcap = cv.VideoCapture('/content/2017_08_31_0121.mp4')
assert vidcap.isOpened()

fps_in = vidcap.get(cv.CAP_PROP_FPS)
fps_out = 3

index_in = -1
index_out = -1

while True:
    success = vidcap.grab()
    if not success:
        break
    index_in += 1

    out_due = int(index_in / fps_in * fps_out)
    if out_due > index_out:
        success, frame = vidcap.retrieve()
        if not success:
            break
        index_out += 1

        # do something with `frame`
My problem here is that when I extract frames from a video using OpenCV, sometimes the frames come out flipped upside down; this has happened to me on both my machine (Windows) and my VM (Ubuntu). But for some of the videos I tested, the frames are not flipped. So I wonder what factor is involved, or what should be changed/added in my code, to make the extraction work without a flip.
def extract_frame(video, folder):
    global fps
    os.mkdir('./green_frame/{folder}/'.format(folder=folder))
    vidcap = cv2.VideoCapture(video)
    success, image = vidcap.read()
    fps = vidcap.get(cv2.CAP_PROP_FPS)
    count = 0
    success = True
    while success:  # os.path.join(pathOut,(name+'.png'))
        cv2.imwrite(os.path.join('./green_frame/{folder}/'.format(folder=folder), "frame%d.png" % count), image)
        success, image = vidcap.read()
        print('Read a new frame: ', success)
        count += 1
This is an example of a frame I get from this code.
My original video that I used looks like this, upside down relative to that frame:
So, in my case, what do I have to change so the frames don't flip like in my first picture? Is it related to the resolution or framerate of the video? I tested with a 1280x720 video and all of the extracted frames are flipped upside down, but frames from a 568x320 video are normal.
Thank you
Edit:
So, I looked at the information for the video and found out that the metadata has rotate: 180 for the video that extracts to upside-down frames.
But when I check a normal video that produces non-upside-down frames, it does not have rotate: 180.
So from this, how can I deal with a video that has a rotation?
Update
This problem can now be solved by simply updating to OpenCV v4.5 and above.
If upgrading is a problem follow the old answer, below.
For anyone still looking into this, I was just stuck on the same problem. Turns out some Android phones and iPhones take images/frames in landscape and convert them on the fly according to the exif 'rotate' tag to display the images/frames.
A weird design choice in OpenCV is that cv2.imread(img_file) already reads the image in the correct orientation by reading the image's rotate tag, but cv2.VideoCapture's read() method does not do this.
So, to fix this, I used ffmpeg to read the 'rotate' tag and rotate the video frame to its correct orientation. (Big thanks to the comments above for pointing me in the right direction 👍)
Following is the code:
Make sure you have ffmpeg for python. (pip install ffmpeg-python)
Create a method to check if rotation is required by the video_file:
import ffmpeg
import cv2

def check_rotation(path_video_file):
    # this returns meta-data of the video file in the form of a dictionary
    meta_dict = ffmpeg.probe(path_video_file)

    # from the dictionary, meta_dict['streams'][0]['tags']['rotate'] is the key
    # we are looking for
    rotateCode = None
    if int(meta_dict['streams'][0]['tags']['rotate']) == 90:
        rotateCode = cv2.ROTATE_90_CLOCKWISE
    elif int(meta_dict['streams'][0]['tags']['rotate']) == 180:
        rotateCode = cv2.ROTATE_180
    elif int(meta_dict['streams'][0]['tags']['rotate']) == 270:
        rotateCode = cv2.ROTATE_90_COUNTERCLOCKWISE

    return rotateCode
Create a method to correct the rotation of the frame in video file:
def correct_rotation(frame, rotateCode):
    return cv2.rotate(frame, rotateCode)
Finally, do this in your main loop:
# open a pointer to the video file stream
vs = cv2.VideoCapture(video_path)

# check if the video requires rotation
rotateCode = check_rotation(video_path)

# loop over frames from the video file stream
while True:
    # grab the frame from the file
    grabbed, frame = vs.read()

    # if the frame was not grabbed -> end of the video
    if not grabbed:
        break

    # check if the frame needs to be rotated
    if rotateCode is not None:
        frame = correct_rotation(frame, rotateCode)

    # now your logic can start from here
Hope this helps 🍻
Sometimes the following will solve the problem of opening some videos upside-down.
cap = cv2.VideoCapture(path, apiPreference=cv2.CAP_MSMF)
When I recently ran into this issue, I found that all that was required was to update to OpenCV v4.5:
This is the related Github issue: https://github.com/opencv/opencv/issues/15499
Here is the commit: https://github.com/opencv/opencv/commit/f0271e54d90b3af62301f531f5f00995b00d7cd6
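If it helps, here is a quick way to check what a 4.5+ capture reports; this is just a sketch assuming the CAP_PROP_ORIENTATION_META / CAP_PROP_ORIENTATION_AUTO properties that come with these newer builds:
import cv2

cap = cv2.VideoCapture(video_path)  # video_path as in the question
print('OpenCV version:', cv2.__version__)
# rotation stored in the stream metadata, in degrees
print('orientation meta:', cap.get(cv2.CAP_PROP_ORIENTATION_META))
# 1 = apply the rotation automatically when reading, 0 = return raw frames
cap.set(cv2.CAP_PROP_ORIENTATION_AUTO, 1)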
The rotate tag is optional, so check_rotation will fail when the tag is missing.
This code fixes it:
def check_rotation(path_video_file):
    # this returns meta-data of the video file in the form of a dictionary
    meta_dict = ffmpeg.probe(path_video_file)
    # the 'rotate' tag is optional, so default to 0 when it is missing
    rotate = meta_dict.get('streams', [dict(tags=dict())])[0].get('tags', dict()).get('rotate', 0)
    rotate = round(int(rotate) / 90.0) * 90 % 360
    # map the angle back to the rotate code expected by cv2.rotate
    if rotate == 90:
        return cv2.ROTATE_90_CLOCKWISE
    elif rotate == 180:
        return cv2.ROTATE_180
    elif rotate == 270:
        return cv2.ROTATE_90_COUNTERCLOCKWISE
    return None
I would just do this in your frame processing loop:
frame = cv2.flip(frame,0)
The 0 flips vertically; see the OpenCV documentation for more info.
I'm working with code that analyzes frames from a live stream with OpenCV and, if a condition is met, saves the current frame to a file.
The infinite loop to analyze the video frame by frame is something like this:
while True:
    ret, frame = stream.read()
    if conditionisMet:
        pil_image = Image.fromarray(frame)
        pil_image.save("/path/to/folder/image.jpg")
    cv2.imshow("LiveStream", frame)
What I want to add is that if the condition is met again too soon (within 20-30 seconds), the image should not be saved, and the while loop should grab another frame and continue its work. I've tried time.sleep(30.0) inside the if statement, but it blocks the while loop while waiting for the 30 seconds to pass. Is there a way to use time.sleep in this case, or another method suitable for my needs?
Thanks in advance
You could do something like this:
import time

last_grab = time.time() - 30  # this is to get things started
while True:
    if condition and time.time() - last_grab > 30:
        last_grab = time.time()
        # Do things here
    else:
        continue
Just add a variable to keep track of your last saving time:
import time

last_save_time = time.time()
while True:
    ret, frame = stream.read()
    if conditionisMet and time.time() - last_save_time > 20:
        pil_image = Image.fromarray(frame)
        pil_image.save("/path/to/folder/image.jpg")
        # update last save time
        last_save_time = time.time()
    cv2.imshow("LiveStream", frame)
Why not just capture the amount of time elapsed, then save the image only if it is greater than the given amount of time?
from datetime import datetime as dt, timedelta

a = dt.now()                   # time of the last save
b = dt.now()                   # current time, taken inside the loop
c = b - a                      # elapsed time as a timedelta
if c > timedelta(seconds=30):  # save only if enough time has passed
    # do something (save the image), then reset a = dt.now()
    pass