I wish to use python to open a video file (avi, wmv, mp4), determine the total number of frames contained within the video, and save an arbitrary frame from within the video as an image file.
I have looked at pyffmpeg, but I do not know how to obtain the total number of frames contained in the video without iterating over each (which is incredibly slow). My code to obtain the number of frames in a video is given below:
import pyffmpeg
stream = pyffmpeg.VideoStream()
stream.open('video.avi')
frame_no = 0
# Very inefficient code:
while stream.GetFrameNo(frame_no):
    frame_no = frame_no + 1
Is there a way in which I can do this efficiently? If not, please suggest an alternative extension or approach; code fragments would be a nice bonus.
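For reference, a minimal sketch of an efficient alternative, assuming OpenCV is an acceptable substitute for pyffmpeg (the frame count is read from the container header, so no decoding loop is needed; note that for some containers CAP_PROP_FRAME_COUNT is only an estimate):

import cv2

cap = cv2.VideoCapture('video.avi')
# Fast: the count comes from the header rather than from decoding frames.
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Seek to an arbitrary frame and save it as an image.
target = total_frames // 2
cap.set(cv2.CAP_PROP_POS_FRAMES, target)
ok, frame = cap.read()
if ok:
    cv2.imwrite(f'frame_{target}.png', frame)
cap.release()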
Background
For a research project, we are recording video data from two cameras and feeding a synchronization pulse directly into the microphone ADC every second.
Problem
We want to derive, for each camera frame, a timestamp in the clock of the pulse source, so that we can relate the camera images temporally. With our current methods (see below), we get an offset of around 2 frames between the cameras. Unfortunately, inspection of the video shows that we are clearly 6 frames off (at least at one point) between the cameras.
I assume this is because we are relating the audio and video signals incorrectly (see below).
Approach I think I need help with
I read that the MP4 container should carry PTS times for both video and audio. How do we access those programmatically? Python would be perfect, but if we have to call ffmpeg via system calls, we can do that too ...
Where we currently fail
The original idea was to find video and audio times as
audio_sample_times = np.arange(N_audiosamples) / audio_sampling_rate
video_frame_times = np.arange(N_videoframes) / video_frame_rate
then identify the audio_pulse_times on the audio_sample_times axis, calculate the relative position of each video frame time between the audio_pulse_times surrounding it, and map that same relative position onto the corresponding source_pulse_times.
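A hedged sketch of that mapping, assuming the pulse times have already been detected in both clocks (audio_pulse_times and source_pulse_times are hypothetical arrays of equal length, one entry per pulse):

import numpy as np

def map_to_source_clock(video_frame_times, audio_pulse_times, source_pulse_times):
    # Index of the first pulse at or after each frame time, clipped so
    # that every frame has a pulse on both sides.
    idx = np.clip(np.searchsorted(audio_pulse_times, video_frame_times),
                  1, len(audio_pulse_times) - 1)
    t0, t1 = audio_pulse_times[idx - 1], audio_pulse_times[idx]
    s0, s1 = source_pulse_times[idx - 1], source_pulse_times[idx]
    # Same relative position between the corresponding source pulses.
    frac = (video_frame_times - t0) / (t1 - t0)
    return s0 + frac * (s1 - s0)

This piecewise-linear mapping is what np.interp(video_frame_times, audio_pulse_times, source_pulse_times) computes in a single call, for frame times inside the pulse range.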
However, a first indication that this approach is problematic is that for some videos, N_audiosamples/audio_sampling_rate differs from N_videoframes/video_frame_rate by multiple frames.
What I have found so far
OpenCV's cv2.CAP_PROP_POS_MSEC seems to do exactly what we already do, and does not access any PTS ...
Edit: What I took from the winning answer
import av
import numpy as np
from tqdm import tqdm

container = av.open(video_path)
signal = []
audio_sample_times = []
video_sample_times = []
for frame in tqdm(container.decode(video=0, audio=0)):
    if isinstance(frame, av.audio.frame.AudioFrame):
        # Assumes the audio time base is 1/sample_rate, so pts is in samples.
        sample_times = (frame.pts + np.arange(frame.samples)) / frame.sample_rate
        audio_sample_times += list(sample_times)
        # Keep channel 0 only.
        signal_f_ch0 = frame.to_ndarray().reshape((-1, len(frame.layout.channels))).T[0]
        signal += list(signal_f_ch0)
    elif isinstance(frame, av.video.frame.VideoFrame):
        # pts * time_base yields seconds.
        video_sample_times.append(float(frame.pts * frame.time_base))

signal = np.abs(np.array(signal))
audio_sample_times = np.array(audio_sample_times)
video_sample_times = np.array(video_sample_times)
Unfortunately, in my particular case, all pts are consecutive and gapless, so the result is the same as with the naive solution ...
From picture clues, we identified a section of roughly 10 s in the videos somewhere within which they desync, but we can't find any trace of that in the data.
You need to run ffprobe to retrieve the PTS times. I don't know the exact command, but if you're ok with another package, try ffmpegio:
pip install ffmpegio-core
# OR
pip install ffmpegio  # if you also want to use it to read video frames & audio samples
If you're on Windows, see this doc on where ffmpeg.exe can be found automatically.
Then you can run:
import ffmpegio
frames = ffmpegio.probe.frames('video.mp4', intervals=10)
This returns the frame info of the first 10 packets as a list of dicts (streams mixed, in pts order). If you remove the intervals argument, it retrieves info for every frame (which will take a long time).
Inspect each dict in frames and decide which entries you need (say 'media_type', 'stream_index', 'pts', and 'pts_time'), then add an entries argument containing these:
frames = ffmpegio.probe.frames('video.mp4', intervals=10,
                               entries=['media_type', 'stream_index', 'pts', 'pts_time'])
Once you're happy with what it returns, incorporate it into your program. The intervals argument accepts many different formats; please read the doc.
What this or any other FFmpeg-based approach does not offer is getting this info together with the data frames; you need to read the frame timing data separately and mesh it with the data yourself. If you prefer a solution with more control (but perhaps more coding), look into pyav, which interfaces FFmpeg's underlying libraries. I'm fairly certain you can retrieve pts simultaneously with the frame data.
Disclaimer: This function has not been tested extensively. So, you may encounter an issue. If you have, please report on GitHub and I'll fix it asap.
I want to grab 5 frames of a video, distributed evenly and including the first and last frame. The answer to this question helped me loop over the video and get frames, but I couldn't figure out how to tell when the last frame is coming. Also, looping over the whole video seems a bit expensive.
Python - Extracting and Saving Video Frames
Is there a better way of getting 5 specific frames (e.g. one every 20% of the video), or at least an easy way of getting the total frame count? I already tried multiplying the duration and fps from the metadata, but those numbers seem to be rounded and give a wrong count.
Thank you for your help.
Whether this is possible depends on the container and codec being used. Assuming the codec allows retrieving the total number of frames, you can do something like this:
import imageio.v3 as iio
import numpy as np
my_video = "imageio:cockatoo.mp4"
props = iio.improps(my_video, plugin="pyav")
# Make sure the codec knows the number of frames
assert props.shape[0] != -1
for idx in np.linspace(0, props.shape[0] - 1, 5, dtype=int):
    # imageIO < 2.21.0
    image = iio.imread(my_video, index=idx)
    # imageIO > 2.21.0
    # image = iio.imread(my_video, index=idx, plugin="pyav")
    iio.imwrite(f"delete_me/frame_{idx:03d}.png", image)
A few notes:
- You want to use pyav for the call to iio.improps, because it doesn't decode frames, so it is fast.
- Some codecs or containers (especially when streaming video) either don't report or can't report the number of frames stored within. In this case props.shape[0] will be -1 and we can't use this method.
- Normally I'd recommend using pyav for the imread call as well, but there is a bug in the plugin that causes an EoF exception when trying to index= to the last frame. This will be fixed by #855, though I can't say if it will make it into this weekly release or the next.
- If the codec used doesn't guarantee a constant framerate, you won't get any speed advantage over iio.imiter, because we can't safely predict how far into the video to seek. If you know that your video has a constant framerate, however, you can use the constant_framerate kwarg of the pyav plugin to speed things up. (IMPORTANT: if the video has variable framerate and you set this anyway, the seek will not be accurate.)
As usual, you can use iio.imopen if you want to do all of these operations without reopening the video each time.
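As a sketch of that last point, reusing my_video and the imports from the example above, and assuming the pyav plugin's read/properties methods mirror the imread/improps calls (which is how imageio v3 plugin objects generally behave):

with iio.imopen(my_video, "r", plugin="pyav") as file:
    n_frames = file.properties().shape[0]
    for idx in np.linspace(0, n_frames - 1, 5, dtype=int):
        image = file.read(index=idx)  # video stays open between reads
        iio.imwrite(f"delete_me/frame_{idx:03d}.png", image)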
I am fairly new to OpenCV and image editing; a self-learner, you could say. I wanted to build a proof of concept of text morphing in videos, like Google Lens does, but with the help of OpenCV.
I have achieved that for a single video in a single run. What I want to do now is take one input video, process it at the given frame positions, save the output, then take that processed output as the input for the next iteration, and save it again after the new edits are made.
I am trying to take data from a json file, which looks like this.
[screenshot of the JSON file]
Here is a link to my code. I am a complete newbie trying to learn, so my methods and approach might be highly inefficient, but I would appreciate any help.
https://colab.research.google.com/drive/1WJVklMHESUAOa5wlLfjjpPVjOSfKt2i5?usp=sharing
When you read the video to the end, it doesn't just reset.
So you need to reset the video on every loop. Either open the VideoCapture again, i.e. move cap = cv2.VideoCapture(video_original) inside your for document in range loop,
or set the position back to whatever start frame you want (e.g. 0) using cap.set(cv2.CAP_PROP_POS_FRAMES, self.frame_num) inside your loop.
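A minimal sketch of the second option, assuming a hypothetical per-iteration edit step (the document loop and file name stand in for the ones in your notebook):

import cv2

cap = cv2.VideoCapture('video_original.mp4')
for document in range(3):  # one pass per set of edits (illustrative)
    # Rewind before each pass; otherwise cap.read() keeps returning
    # (False, None) once the first pass has hit the end of the file.
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... apply this iteration's edits to frame ...
cap.release()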
I can open a video and play it with OpenCV 2 using cv2.VideoCapture(myvideo). But is there a way to delete a frame within that video using OpenCV 2? The deletion must happen in-place; that is, the file being played must end up shorter because of the deleted frames. Simply zeroing out the matrix wouldn't be sufficient.
For example something like:
video = cv2.VideoCapture('myvideo.flv')
while True:
    ret, img = video.read()  # read() returns (success_flag, frame)
    if not ret:
        break
    # Show the image
    cv2.imshow('frame', img)
    cv2.waitKey(1)
    # Then go delete it and proceed to the next frame, but is this possible?
    # delete(img)??
So after the above code ran, the video file would technically contain no frames, since each frame is read and then deleted from the file.
OpenCV is not the right tool for this job. What you need for this is a media processing framework, like ffmpeg (=libavformat/libavcodec/libswscale) or GStreamer.
Also, depending on the encoding scheme used, deleting just a single frame may not be possible. Frame-exact editing is only possible in a video consisting purely of intra frames (I-frames). If the video is encoded using so-called groups of pictures (GOPs), removing a single frame requires re-encoding the whole GOP it was part of.
You can't do it in-place, but you can use OpenCV's VideoWriter to write the frames that you want in a new video file.
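A minimal sketch of that approach, assuming the frame to drop is known by index and that XVID/AVI output settings are an acceptable substitute for the source encoding (re-encoding is unavoidable here):

import cv2

cap = cv2.VideoCapture('myvideo.flv')
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter('shortened.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, size)

frame_to_delete = 100  # hypothetical index of the frame to drop
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx != frame_to_delete:  # copy everything except the unwanted frame
        out.write(frame)
    idx += 1

cap.release()
out.release()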
I am trying to extract the prevailing bitrate of a video file (e.g. an .mkv file containing a movie) at a regular sampling interval of 1-10 seconds under normal playback conditions, kind of like what VLC shows in its statistics window during playback.
Can anyone suggest the best way to bootstrap the coding of such an analyser? Is there a library that provides an API to this kind of information? Perhaps a Python wrapper for ffmpeg or an equivalent tool that processes video files and can extract such statistics.
What I am really aiming for is a CSV file containing the offset in seconds and the average or actual bitrate in KiB/s at that offset into the asset.
Update:
I built pyffmpeg and wrote the following spike:
import pyffmpeg

reader = pyffmpeg.FFMpegReader(False)
reader.open("/home/mark/Videos/BBB.m2ts", pyffmpeg.TS_VIDEO)
tracks = reader.get_tracks()

# Called for each frame
def obs(f):
    pass

tracks[0].set_observer(obs)
reader.run()
But observing the frame information (f) in the callback does not appear to give me any hooks for calculating per-second bitrates. In fact, bitrate calculations within pyffmpeg are made across the entire file (filesize / duration), so the library's treatment of bitrate is very superficial. Clearly its focus is on extracting I-frames and other frame/GOP-specific work.
Something like these:
http://code.google.com/p/pyffmpeg/
http://pymedia.org/
You should be able to do this with gstreamer. http://pygstdocs.berlios.de/pygst-tutorial/seeking.html has an example of a simple media player. It calls
pos_int = self.player.query_position(gst.FORMAT_TIME, None)[0]
periodically. All you have to do is call query_position() a second time with gst.FORMAT_BYTES, do some simple math, and voila! Bitrate vs. time.
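A minimal sketch of that math, assuming the pygst (GStreamer 0.10) player object from the linked tutorial; the helper name and the caller-managed previous-sample state are hypothetical:

import gst  # pygst, as used in the linked tutorial

def sample_bitrate(player, prev_time_ns, prev_bytes):
    # Position in the stream, queried once as time and once as bytes.
    pos_time_ns = player.query_position(gst.FORMAT_TIME, None)[0]
    pos_bytes = player.query_position(gst.FORMAT_BYTES, None)[0]
    dt = (pos_time_ns - prev_time_ns) / 1e9  # nanoseconds -> seconds
    kib_per_s = ((pos_bytes - prev_bytes) / 1024.0) / dt if dt > 0 else 0.0
    return kib_per_s, pos_time_ns, pos_bytes

Calling this once per sampling interval and writing (pos_time_ns / 1e9, kib_per_s) rows to a file would produce the CSV described in the question.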