I've been looking for a function in OpenCV (cv::videostab) that would let me do video stabilization in real time. But as I understand it, this is not yet available in OpenCV: TwoPassStabilizer (and OnePassStabilizer) require a whole video at once rather than two consecutive frames.
Ptr<VideoFileSource> source = makePtr<VideoFileSource>(inputPath); // the whole video
TwoPassStabilizer *twoPassStabilizer = new TwoPassStabilizer();
twoPassStabilizer->setFrameSource(source);
So I have to do this without the OpenCV video stabilization classes. Is this true?
The OpenCV library does not provide a dedicated module for real-time video stabilization.
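However, the building blocks are there. Below is a minimal sketch of frame-to-frame stabilization pieced together from generic OpenCV functions (feature tracking plus a partial-affine estimate). It only aligns each frame to the previous one, with no trajectory smoothing, and the helper name stabilize_pair is purely illustrative:

import cv2

def stabilize_pair(prev_gray, curr_gray, curr_frame):
    # track corners from the previous frame into the current one
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=30)
    if prev_pts is None:
        return curr_frame
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_curr = curr_pts[status.flatten() == 1]
    if len(good_prev) < 4:
        return curr_frame
    # estimate the inter-frame motion and warp the current frame back onto the previous one
    m, _ = cv2.estimateAffinePartial2D(good_curr, good_prev)
    if m is None:
        return curr_frame
    h, w = curr_gray.shape
    return cv2.warpAffine(curr_frame, m, (w, h))

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("Stabilized (previous-frame alignment)", stabilize_pair(prev_gray, gray, frame))
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

This is only a sketch: for usable results you would accumulate the inter-frame transforms into a trajectory, smooth it, and crop the borders introduced by warping.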
That said, if you're using Python, you can instead use my threaded VidGear video-processing library, whose Stabilizer class now provides real-time video stabilization with minimal latency and little to no additional computational cost. Here's a basic usage example for your convenience:
# import libraries
from vidgear.gears import VideoGear
import cv2

# open any valid video stream (e.g. the device at index 0) with stabilization enabled
stream = VideoGear(source=0, stabilize=True).start()

# infinite loop
while True:

    # read stabilized frames
    frame = stream.read()

    # check if frame is None
    if frame is None:
        # if True, break the infinite loop
        break

    # do something with the stabilized frame here

    # show output window
    cv2.imshow("Stabilized Frame", frame)

    # check for 'q' key press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        # if 'q' was pressed, break out
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
More advanced usage can be found here: https://github.com/abhiTronix/vidgear/wiki/Real-time-Video-Stabilization#real-time-video-stabilization-with-vidgear
We created an open-source module for video stabilization that works by fixing the coordinate system.
https://github.com/RnD-Oxagile/EvenVizion
Does anyone know a way to create a camera source using Python? For example, I have 2 cameras:
Webcam
USB Cam
Whenever I use a web application or interface which requires a camera, I have the option to select a camera source from the two mentioned above.
What I want to achieve is this: I am processing real-time frames in my Python program with a camera portal of my own, like this:
import numpy as np
import cv2

while True:
    _, frame = self.cam.read()
    k = cv2.waitKey(1)
    if k & 0xFF == ord('q'):
        self.cam.release()
        cv2.destroyAllWindows()
        break
    else:
        cv2.imshow("Frame", frame)
Now I want to use these frames as a camera source, so that the next time I open a piece of software or a web app which requires a camera, it shows the options as follows:
Webcam
USB Cam
Python Cam (Or whatever be the name)
Does anyone have any advice or hints on how to go about that? I have seen some premium software which generates its own camera source, but it's written in C++. I was wondering whether this could be done in Python or not.
Here is an example of the same:
As you can see, there are multiple camera sources there. I want to add one camera source which displays the frames processed by Python in its feed.
You can use v4l2loopback (https://github.com/umlaeute/v4l2loopback) for that. It is written in C, but there are some Python wrappers. I've used virtualvideo (https://github.com/Flashs/virtualvideo).
The virtualvideo README is pretty straightforward, but here is my modification of their GitHub example to match your goal:
import virtualvideo
import cv2

class MyVideoSource(virtualvideo.VideoSource):
    def __init__(self):
        self.cam = cv2.VideoCapture(0)
        _, img = self.cam.read()
        size = img.shape
        # opencv's shape is y, x, channels
        self._size = (size[1], size[0])

    def img_size(self):
        return self._size

    def fps(self):
        return 30

    def generator(self):
        while True:
            _, img = self.cam.read()
            yield img

vidsrc = MyVideoSource()
fvd = virtualvideo.FakeVideoDevice()
fvd.init_input(vidsrc)
fvd.init_output(2, 640, 480, pix_fmt='yuyv422')
fvd.run()
In init_output, the first argument is the virtual camera device number that I created when adding the kernel module:
sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1
The last arguments are my webcam's size and the pixel format.
You should see this option in Hangouts:
I'm having problems with using OpenCV, Python, Tkinter and PiCamera in a program.
A Tkinter window is used to display and set the values to be used in OpenCV:
I am trying to continuously read and process the video feed from the PiCamera. Currently I am using:
while True:
    for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
        root.update_idletasks()
But after some reading on the internet I found that using update() is not advisable, so I tried my luck at understanding threading but failed. There are a lot of examples using VideoCapture() with USB cameras, but not many with the PiCamera. Is there any way other than threading?
You can use root.after(...). Below is some sample code:
# define a variable used to stop the image capture
do_image_capture = True

def capture_image():
    if do_image_capture:
        camera.capture(rawCapture, format='bgr', use_video_port=True)
        # do whatever you want with the captured data
        ...
        root.after(100, capture_image)  # adjust the first argument to suit your case

capture_image()
The sample code below uses a thread:
import threading

stop_image_capture = False

def capture_image():
    for frame in camera.capture_continuous(rawCapture, format='bgr', use_video_port=True):
        # do whatever you want with the captured image
        ...
        if stop_image_capture:
            break

t = threading.Thread(target=capture_image)
t.setDaemon(True)
t.start()
Basics
I am trying to capture video from a Logitech Brio camera (which can support up to 60 fps at 1080p -- tested in VLC), but I cannot seem to get higher than 30 fps using OpenCV's VideoCapture.
My System & Settings
Info about what I have and work with:
Camera: Logitech Brio
Language: Python
OpenCV version: 4.0
Windows 10 with PowerShell (this is needed because my main program interacts with other Windows-based applications)
Codec-related settings:
FourCC = MJPEG (as the other ones don't support the resolutions I need for my camera, for some reason)
The following code is used to open the camera: cv2.VideoCapture(cv2.CAP_DSHOW + 1); a rough sketch of how these settings are applied is shown below.
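A minimal sketch, assuming placeholder values for the resolution and FPS (i.e. not an exact copy of my code):

import cv2

# open the second DirectShow device
cap = cv2.VideoCapture(cv2.CAP_DSHOW + 1)
# request MJPEG and a 1080p/60fps mode; the backend may silently fall back
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)
print(cap.get(cv2.CAP_PROP_FPS))  # what the backend reports, not necessarily what you measure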
Some background and Code
I have used a threading approach, as suggested here: OpenCV VideoCapture lag due to the capture buffer
Basically, my current code has 2 threads: one for the grab() function, and one for the retrieve() function plus saving the retrieved images as an AVI.
The thread that runs the grab() function looks something like this:
def __frame_grab_worker(self, cap, cameraNumber):
    try:
        while cap.isOpened():
            # sync with retrieve
            acquired = self._retrievingLock[cameraNumber].acquire()
            try:
                cap.grab()
            finally:
                self._retrievingLock[cameraNumber].release()
            if self.interrupt == True:
                break
    finally:
        self._cleanupLock.release()
The thread that retrieves the image and saves it looks something like this:
def __record(self, filename, vidWidth, vidHeight, fps, cap, cameraNumber):
    if self.numberOfCams > 0:
        fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
        out = cv2.VideoWriter(filename, fourcc, fps,
                              (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                               int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
        while cap.isOpened():
            self._retrievingLock[cameraNumber].acquire()
            ret, frame = cap.retrieve()
            self._retrievingLock[cameraNumber].release()
            if ret == True:
                if self.recordingFlag == True:
                    out.write(frame)
                cv2.imshow(str(cameraNumber), frame)
            # stop if interrupt flag presented
            if cv2.waitKey(1) & self.interrupt == True:
                break
        cap.release()
        out.release()
        cv2.destroyAllWindows()
Things I've tried
I profiled my code. In general, between each iteration of the while loop in __record there is about a 0.03 sec delay, which corresponds to 30 fps. This is split roughly equally between cap.retrieve() and the write and imshow functions.
I noticed that if I comment out the parts of the code that do any processing on the retrieved image (i.e. out.write(frame) or cv2.imshow(...)), the time taken by cap.retrieve() increases to 0.03 (from around 0.015 previously).
So I assume that the problem has to do either with DSHOW + MJPEG not allowing more than 30 fps, or with the underlying OpenCV code.
I further looked over the opencv codebase (specifically videoio/src/cap_dshow.cpp & videoio/src/cap.cpp) and couldn't find anything useful... I might have missed something.
If anyone has a solution or knows anything that could help, I would appreciate it.
Thanks in advance :)
I want to create a webcam streaming app that records the webcam stream for, say, about 30 seconds and saves it as myFile.wmv. To get the live camera feed, I know this code:
import cv2
import numpy as np

c = cv2.VideoCapture(0)
while(1):
    _, f = c.read()
    cv2.imshow('e2', f)
    if cv2.waitKey(5) == 27:
        break
cv2.destroyAllWindows()
But I have no idea how to record for a given number of seconds and save it as a file in the current directory.
Could someone please point me in the right direction?
Thanks
ABOUT TIME
Why not use the Python time functions, in particular time.time()? Look at this answer about time in Python.
NB: OpenCV should have (or used to have) its own timer, but I cannot tell you for sure whether it works in current versions.
ABOUT RECORDING/SAVING
Look at this other answer
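Putting the two together, a minimal sketch of a time-limited recording might look like the following; the frame rate, codec and filename are just example values (codec availability depends on your OpenCV/FFmpeg build), so adjust them to your setup:

import time
import cv2

duration = 30  # seconds to record (example value)
cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = 20.0  # should match what the camera actually delivers, or playback speed will be off
fourcc = cv2.VideoWriter_fourcc(*'WMV2')  # for a .wmv container; 'XVID' + .avi is a common fallback
out = cv2.VideoWriter('myFile.wmv', fourcc, fps, (width, height))

start = time.time()
while time.time() - start < duration:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)
    cv2.imshow('recording', frame)
    if cv2.waitKey(1) == 27:  # Esc stops early
        break

cap.release()
out.release()
cv2.destroyAllWindows()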
OpenCV allows you to record video, but not audio. There is a script I came across from JRodrigoF that uses OpenCV to record video and PyAudio to record audio. I used it for a while on a similar project; however, I noticed that sometimes the threads would hang and cause the program to crash. Another issue is that OpenCV does not capture video frames at a reliable rate, and FFmpeg would distort the video when re-encoding.
https://github.com/JRodrigoF/AVrecordeR
I came up with a new solution that records much more reliably and with much higher quality. It presently only works on Windows because it uses pywinauto and the built-in Windows Camera app. The last bit of the script does some error checking to confirm the video recorded successfully, by checking the timestamp in the video's file name.
https://gist.github.com/mjdargen/956cc968864f38bfc4e20c9798c7d670
import pywinauto
import time
import subprocess
import os
import datetime

def win_record(duration):
    subprocess.run('start microsoft.windows.camera:', shell=True)  # open camera app

    # focus window by getting handle using title and class name
    # subprocess call opens camera and gets focus, but this provides an alternate way
    # t, c = 'Camera', 'ApplicationFrameWindow'
    # handle = pywinauto.findwindows.find_windows(title=t, class_name=c)[0]
    # # get app and window
    # app = pywinauto.application.Application().connect(handle=handle)
    # window = app.window(handle=handle)
    # window.set_focus()  # set focus
    time.sleep(2)  # have to sleep

    # take control of camera window to take video
    desktop = pywinauto.Desktop(backend="uia")
    cam = desktop['Camera']
    # cam.print_control_identifiers()

    # make sure we are in video mode
    if cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").exists():
        cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").click()
    time.sleep(1)

    # start then stop video
    cam.child_window(title="Take Video", auto_id="CaptureButton_1", control_type="Button").click()
    time.sleep(duration+2)
    cam.child_window(title="Stop taking Video", auto_id="CaptureButton_1", control_type="Button").click()

    # retrieve vids from camera roll and sort
    dir = 'C:/Users/michael.dargenio/Pictures/Camera Roll'
    all_contents = list(os.listdir(dir))
    vids = [f for f in all_contents if "_Pro.mp4" in f]
    vids.sort()
    vid = vids[-1]

    # compute time difference
    vid_time = vid.replace('WIN_', '').replace('_Pro.mp4', '')
    vid_time = datetime.datetime.strptime(vid_time, '%Y%m%d_%H_%M_%S')
    now = datetime.datetime.now()
    diff = now - vid_time

    # if the time difference is greater than 2 minutes, assume something went wrong & quit
    if diff.seconds > 120:
        quit()

    subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True)  # close camera app
    print('Recorded successfully!')

win_record(2)
I am trying to capture an image from a webcam when a key is hit. The following code is successful:
import cv

cv.NamedWindow("w1")
camera = cv.CaptureFromCAM(-1)
while True:
    key = cv.WaitKey(0)
    if key == 'q':
        break
    image = cv.QueryFrame(camera)
    cv.ShowImage("w1", image)
cv.DestroyWindow("w1")
It works fine for the first keypress. For the next keypress it shows a frame very close to the first one, even if you moved. After several key presses it changes to the actual image. What I can infer is that there is some kind of buffer where frames are stored.
I am wondering if someone could please help me get the precise frame at the moment a key is hit.
I am using OpenCV with its Python interface. The operating system is Ubuntu 11.04. The frame-capture calls go to the v4l library. I have an integrated webcam on my Dell laptop.
I am wondering if someone can help me with this issue.
Thanks a lot
I suggest you try it in a slightly different manner:
import cv

cv.NamedWindow("w1")
camera = cv.CaptureFromCAM(-1)
while True:
    image = cv.QueryFrame(camera)
    key = cv.WaitKey(33)
    if key == 'q':
        break
    elif key != -1:
        cv.ShowImage("w1", image)
cv.DestroyWindow("w1")
Notice the change in the call to cv.WaitKey(): instead of blocking on it, just wait for a reasonable time. Then check if a key was actually pressed (key != -1).
I should note that your code worked on my Mac OS X 10.6 with OpenCV 2.3.
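For reference, here is a roughly equivalent sketch using the newer cv2 API (OpenCV 3+/4.x); the idea is the same: keep querying frames so the driver buffer stays fresh, and only display the frame grabbed at the moment a key is pressed:

import cv2

cap = cv2.VideoCapture(-1)
cv2.namedWindow("w1")
while True:
    ok, frame = cap.read()   # keep reading so the buffer never holds stale frames
    if not ok:
        break
    key = cv2.waitKey(33)
    if key == ord('q'):
        break
    elif key != -1:          # some other key was pressed: show the frame just grabbed
        cv2.imshow("w1", frame)
cap.release()
cv2.destroyAllWindows()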