how to let OpenCV save the last few seconds of analyzed video - python

I am trying to program a python OpenCV app for my own use because I can't go to gyms for some time. I would like to do the following:
Capture frames from a video stream using OpenCV [done]
Have OpenCV track a yellow soccer ball and return the ball's coordinates in the frame [done]
Come up with an algorithm to detect when a juggling attempt failed, for example when the ball goes out of frame [done]
Now my question is: let's say I want to save the 10 seconds of video right before this event into an mp4 file. How should I do it? Is there a good template I can follow?
Thanks!

You may create a memory buffer holding 10 seconds' worth of video (about 300 frames for most webcams), appending new frames and discarding the oldest as you go.
Once your ball is out of the frame, open a video file and write out the frames from the buffer.

Related

Why is OpenCV's video reading fps not the same as the video's encoded fps?

In OpenCV with Python, when the fps of the webcam and of a video file in the directory are the same, why does the video file play in fast forward while the webcam shows frames at a normal rate? What role does the cv2.waitKey() function play here?
The fps of a video file describes how it was encoded: how many frames are contained within each second, as the name suggests. For example, extracting 1 second of the video will produce exactly that number of frames (images).
The fps of the web camera means how many frames the camera can capture per second. If saved to a video file, that is how many frames each 1-second span will contain.
There is a third (probably hidden to you) concept here though: how fast OpenCV can read a video file. For typical resolutions on a modern computer, this rate is higher than the video's actual fps, so your program appears to play the video in fast forward because it reads (and displays) frames at a faster pace than the file's fps.
You can calculate the delay you should introduce into the playback loop to force it to display at a normal pace, rather than tuning it by trial and error.
Hope this clarifies the issue.

Deleting frames from a video as I add new ones

I'm detecting whether the camera can see an object in OpenCV. I want to record the last 10 seconds the object was in the scene. I thought about making 2-3 separate videos and deleting and rewriting them as it goes. How would that be performance-wise? Or is there a way I can keep one video at a set number of frames? I should add that I'm pretty new to OpenCV.

Sampling video and making image cutoffs in python

I've got a video stream (for now I just use a video file). I need to grab one frame every second (or every few seconds) and crop part of each picture based on 8 coordinates (upper-left x/y, upper-right x/y, lower-right x/y and lower-left x/y).
I think I could do the cutting in Java, but I would rather do it in Python, since the entire application is written in Python/Django.
Is it possible to do both of these things directly in Python?
Could you point me to some documentation or whatever?
You can start with some video handling in Python using OpenCV:
Python: Reading Video File and Saving Video File using OpenCV
It contains all the basic links, like reading from a file and from a camera, which gives an initial idea of how to process frames.
Then, once you have each frame in an OpenCV mat, you can form a bounding-box Rect to extract the region of interest (ROI) from it.
Close to this question:
Cropping Live Video Feed
Cropping a single frame can be done as in
Cropping a Single Image in OpenCV Python
This can be repeated for every frame, and you can even write the result to a video file, taking reference from the first link.

Possible to delete frames from a video file in opencv

I can open a video and play it with OpenCV 2 using cv2.VideoCapture(myvideo). But is there a way to delete a frame within that video using OpenCV 2? The deletion must happen in place; that is, the file being played will end up shorter due to the deleted frames. Simply zeroing out the matrix wouldn't be sufficient.
For example something like:
video = cv2.VideoCapture("myvideo.flv")
while True:
    ret, img = video.read()
    if not ret:
        break
    # Show the image
    cv2.imshow("frame", img)
    cv2.waitKey(1)
    # Then go delete it and proceed to the next frame, but is this possible?
    # delete(img)??
So the above loop would technically leave the file containing 0 bytes at the end, since it reads and then deletes each frame of the video file.
OpenCV is not the right tool for this job. What you need is a media processing framework, like ffmpeg (= libavformat/libavcodec/libswscale) or GStreamer.
Also, depending on the encoding scheme used, deleting just a single frame may not even be possible. Frame-exact editing is only possible in a video consisting solely of intra frames (I-frames). If the video is encoded using so-called groups of pictures (GOPs), removing a single frame requires re-encoding the whole GOP it was part of.
You can't do it in-place, but you can use OpenCV's VideoWriter to write the frames that you want in a new video file.

The simplest video streaming?

I have a camera that is taking pictures one by one (about 10 pictures per second) and sending them to a PC. I need to show this incoming sequence of images as live video on the PC.
Is it enough just to use some Python GUI framework, create a control that will hold a single image and just change the image in the control very fast?
Or would that be just lame? Should I use some sort of video streaming library? If yes, what do you recommend?
Or would that be just lame?
No. It wouldn't work at all.
There's a trick to getting video to work. Apple's QuickTime implements that trick. So do a bunch of Microsoft products, plus some open-source video playback tools.
There are several closely-related tricks, all of which are a huge pain in the neck.
Compression. Full-sized video is huge. Do the math: 640x480 at 24-bit color and 30 frames per second adds up quickly. Without compression, you can't read it in fast enough.
Buffering and Timing. Sometimes the data rates and frame rates don't align well. You need a buffer of ready-to-display frames, and you need a deadly accurate clock to display them at exactly the right intervals.
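Doing that math for the example figures above (640x480, 24-bit color, 30 fps) shows why uncompressed video is impractical to stream naively:

```python
# Uncompressed data rate for 640x480, 24-bit (3 bytes/pixel) color at 30 fps.
width, height, bytes_per_pixel, fps = 640, 480, 3, 30
bytes_per_frame = width * height * bytes_per_pixel   # 921,600 bytes per frame
bytes_per_second = bytes_per_frame * fps             # 27,648,000 bytes per second
mib_per_second = bytes_per_second / (1024 * 1024)    # roughly 26.4 MiB/s
print(round(mib_per_second, 1))                      # prints 26.4
```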
Making a sequence of JPEG images into a movie is what iPhoto and iMovie are for.
Usually, what we do is create the video file from the image and play the video file through a standard video player. Making a QuickTime movie or Flash movie from images isn't that hard. There are a lot of tools to help make movies from images. Almost any photo management solution can create a slide show and save it as a movie in some standard format.
Indeed, I think that Graphic Converter can do this.
