I've got a video stream (for now I just use a video file). I need to grab one frame every second (or every few seconds), and I need to cut out a part of each of these pictures based on eight coordinates: the x/y of the upper-left, upper-right, lower-right and lower-left corners.
I think I could do the cropping in Java, but I would rather do it in Python, since the entire application is written in Python/Django.
Is it possible to do both of these things directly in Python?
Could you point me to some documentation or whatever?
You can start with some basic video handling in Python using OpenCV:
Python : Reading Video File and Saving Video File using OpenCV
It covers the basics, like reading from a file or a camera, which gives an initial idea of how to process frames.
Once you have each frame as an OpenCV mat, you can form a bounding rectangle to extract the region of interest (ROI) from it.
A question close to this one:
Cropping Live Video Feed
Cropping a single frame can be done as shown in
Cropping a Single Image in OpenCV Python
This can be repeated for every frame, and you can even write the result to a video file, taking the first reference as a starting point.
I've been using OpenCV and MoviePy to get images out of a video (1 image per second) and once extracted, I analyze the image with pytesseract. The part where the script extracts images takes quite a bit of time. Is it possible, or is there a function that I've overlooked in MoviePy or OpenCV, that allows video frames to be analyzed without having to create images first? This could tremendously speed up the process.
Current steps:
Scan a video (passed as an argument) and extract frames at 1 fps
From each of those images, perform analysis on a specific area
Desired:
Perform analysis on a specific area of the video itself at 1 fps.
If this function exists, please inform me. Otherwise, would there be a workaround for this? Suggestions?
Thanks!!
Good evening,
I have a series of vtk files.
I wrote a script that performs a specific rotation on those files and then, using
animationScene1.GoToNext()
goes to the next scene, where a new rotation is performed, etc.
How can I also save the newly created scenes to a video output?
In short, every time the script jumps to a new scene and rotation, I want to save the previous one, and in the end have a video with the correct rotation per scene.
Thank you very much for your time
Since you are moving the camera to specific positions using a Python script, the easiest way will be to save the images yourself in your script using
SaveScreenshot("path/to/your/imageN.png")
And then reconstruct a video from all the images.
I'm kind of new to Python, so please bear with me. I am having trouble displaying videos inside a Tkinter frame. What I want to do is play a group of videos inside a frame in Tk.
For example: I have 3 videos named a.mp4, b.mp4, and c.mp4.
I want them to play inside a frame without it reloading (closing, then playing the next video).
I have tried OpenCV, but what it does is play a.mp4, close, then play b.mp4.
Any help would be much appreciated; I have been stuck here for days.
You can concatenate videos horizontally, vertically, or in a grid into a single video file, either using ffmpeg or manually in Python.
You can use ffmpeg outside Python to concatenate the videos into a single video and show that as one video:
Vertically stack several videos using ffmpeg?
Or you can make a single video by concatenating those videos horizontally or vertically in Python itself, e.g. use scikit-video or OpenCV to load the videos into different arrays, concatenate them horizontally, vertically, or in a grid, and save the result as a single video.
I can open a video and play it with OpenCV 2 using cv2.VideoCapture(myvideo). But is there a way to delete a frame within that video using OpenCV 2? The deletion must happen in-place; that is, the file being played will end up shorter due to the deleted frames. Simply zeroing out the matrix wouldn't be sufficient.
For example something like:
video = cv2.VideoCapture("myvideo.flv")
while True:
    ret, img = video.read()
    if not ret:
        break
    # Show the image
    cv2.imshow("frame", img)
    cv2.waitKey(1)
    # Then go delete it and proceed to next frame, but is this possible?
    # delete(img)??
So the above code would technically leave a 0-byte file at the end, since it reads and then deletes each frame of the video file.
OpenCV is not the right tool for this job. What you need is a media-processing framework, like ffmpeg (= libavformat/libavcodec/libswscale) or GStreamer.
Also, depending on the encoding scheme used, deleting just a single frame may not even be possible. Frame-exact editing is only possible in a video consisting purely of intra frames (I-frames). If the video is encoded using so-called groups of pictures (GOPs), removing a single frame requires re-encoding the whole GOP it was part of.
You can't do it in-place, but you can use OpenCV's VideoWriter to write the frames that you want to keep into a new video file.
I want to sift through a collection of video files looking for a certain logo, and then record the 10-15 seconds leading up to it. I can recognize the logo by checking a certain pixel color.
How would you do it? Is there software or a Python package that allows me to extract those chunks of the files and write them into a new video?
What I have done so far:
I have found a library that is able to convert a video into a series of BMPs. It's called pyMedia: http://pymedia.org/tut/src/dump_video.py.html and the reverse: http://pymedia.org/tut/src/make_video.py.html
So that's pretty neat. However, it only works with Python 2.3, not with Python 3.
It seems like:
d = e.encode( yuvFrame )
fw.write( d )
writes a BMP file. So how do I look for a certain colored pixel or logo in a BMP file and put it all together? That's what I can't get working somehow. Maybe someone can help me with this.
Edit:
Let me show you what I have done so far:
from PIL import Image

im = Image.open("bride.bmp")
left = 100   # example value; left was undefined in my original snippet
top = 461
width = 10
height = 10
mycolor = (255, 0, 0)  # example value; the logo's color
box = (left, top, left + width, top + height)
croppy = im.crop(box)
# getcolors() returns (count, color) pairs, so compare against the colors
if any(color == mycolor for count, color in croppy.getcolors()):
    print("Logo found")
My logo has a certain color, so this looks for that pixel and prints "Logo found" if the pixel color is found. I didn't really want to build a classifier for that.
Using perhaps OpenCV or another package -- essentially, you want to train a classifier to identify your logo and then feed it the bitmaps from your video. When it identifies the logo, then you trigger the code which captures the previous 15s of video.
This is a very detailed answer about how one might do this in Python: General approach to developing an image classification algorithm for Dilbert cartoons