I want to create a webcam streaming app that records the webcam stream for, say, about 30 seconds and saves it as myFile.wmv. To get the live camera feed I know this code:
import cv2
import numpy as np

c = cv2.VideoCapture(0)
while True:
    _, f = c.read()
    cv2.imshow('e2', f)
    if cv2.waitKey(5) == 27:
        break
cv2.destroyAllWindows()
But I have no idea how to record for a given number of seconds and save it as a file in the current directory.
Could someone point me in the right direction?
Thanks
ABOUT TIME
Why not use the Python time module? In particular, time.time(). Look at this answer about timing in Python.
NB: OpenCV should have (or had) its own timer, but I cannot tell you for sure whether it works in current versions.
ABOUT RECORDING/SAVING
Look at this other answer
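Putting those two pieces together, here is a minimal sketch of a timed recording using time.time() and OpenCV's VideoWriter. The codec, frame rate and frame size are assumptions; adjust them to your camera (the WMV1 FourCC is chosen only to match the .wmv name from the question).
import time
import cv2

duration = 30  # seconds to record
cap = cv2.VideoCapture(0)
fps = 20.0  # assumed frame rate; measure your camera for a real value
fourcc = cv2.VideoWriter_fourcc(*'WMV1')  # Windows Media codec to match myFile.wmv
out = cv2.VideoWriter('myFile.wmv', fourcc, fps, (640, 480))

start = time.time()
while time.time() - start < duration:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # keep frame size consistent with the writer
    out.write(frame)
    cv2.imshow('recording', frame)
    if cv2.waitKey(1) == 27:  # Esc stops early
        break

cap.release()
out.release()
cv2.destroyAllWindows()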
OpenCV allows you to record video, but not audio. There is a script I came across from JRodrigoF that uses OpenCV to record video and PyAudio to record audio. I used it for a while on a similar project; however, I noticed that sometimes the threads would hang and it would cause the program to crash. Another issue is that OpenCV does not capture video frames at a reliable rate and ffmpeg would distort the video when re-encoding.
https://github.com/JRodrigoF/AVrecordeR
I came up with a new solution that records much more reliably and with much higher quality. It currently only works on Windows because it uses pywinauto and the built-in Windows Camera app. The last bit of the script does some error-checking to confirm the video recorded successfully by checking the timestamp in the video's file name.
https://gist.github.com/mjdargen/956cc968864f38bfc4e20c9798c7d670
import pywinauto
import time
import subprocess
import os
import datetime


def win_record(duration):
    subprocess.run('start microsoft.windows.camera:', shell=True)  # open camera app

    # focus window by getting handle using title and class name
    # subprocess call opens camera and gets focus, but this provides alternate way
    # t, c = 'Camera', 'ApplicationFrameWindow'
    # handle = pywinauto.findwindows.find_windows(title=t, class_name=c)[0]
    # # get app and window
    # app = pywinauto.application.Application().connect(handle=handle)
    # window = app.window(handle=handle)
    # window.set_focus()  # set focus
    time.sleep(2)  # have to sleep

    # take control of camera window to take video
    desktop = pywinauto.Desktop(backend="uia")
    cam = desktop['Camera']
    # cam.print_control_identifiers()

    # make sure in video mode
    if cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").exists():
        cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").click()
    time.sleep(1)

    # start then stop video
    cam.child_window(title="Take Video", auto_id="CaptureButton_1", control_type="Button").click()
    time.sleep(duration + 2)
    cam.child_window(title="Stop taking Video", auto_id="CaptureButton_1", control_type="Button").click()

    # retrieve vids from camera roll and sort
    dir = 'C:/Users/michael.dargenio/Pictures/Camera Roll'
    all_contents = list(os.listdir(dir))
    vids = [f for f in all_contents if "_Pro.mp4" in f]
    vids.sort()
    vid = vids[-1]

    # compute time difference
    vid_time = vid.replace('WIN_', '').replace('_Pro.mp4', '')
    vid_time = datetime.datetime.strptime(vid_time, '%Y%m%d_%H_%M_%S')
    now = datetime.datetime.now()
    diff = now - vid_time

    # if time difference is greater than 2 minutes, assume something went wrong & quit
    if diff.seconds > 120:
        quit()

    subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True)  # close camera app
    print('Recorded successfully!')


win_record(2)
Related
I am looking for a good and simple solution to record both audio and video from my Logitech webcam using Python.
I tried using ffmpeg but I can't get it to work well.
Also, I am using this on Windows, so the solution should work on Windows.
Use ffmpeg.
List devices using dshow (DirectShow) input:
ffmpeg -list_devices true -f dshow -i dummy
Example command to capture video and audio:
ffmpeg -f dshow -i video="Camera name here":audio="Microphone name here" -vf format=yuv420p output.mp4
See dshow documentation and FFmpeg Wiki: DirectShow for more info and examples.
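If you need a fixed-length recording (as in the original question), the -t option limits the capture duration. Here is a minimal sketch of driving ffmpeg from Python with subprocess; the device names are placeholders that you would replace with the exact names printed by the -list_devices command, and the 30-second duration and output name are assumptions.
import subprocess

video_dev = "Camera name here"        # replace with your dshow video device name
audio_dev = "Microphone name here"    # replace with your dshow audio device name
duration = 30                         # seconds to record

cmd = [
    "ffmpeg",
    "-y",                             # overwrite the output file if it exists
    "-f", "dshow",
    "-i", f"video={video_dev}:audio={audio_dev}",
    "-t", str(duration),              # stop capturing after `duration` seconds
    "-vf", "format=yuv420p",          # widely compatible pixel format
    "output.mp4",
]
subprocess.run(cmd, check=True)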
As mentioned above, there is a solution from JRodrigoF that uses OpenCV to record video and PyAudio to record audio. I used it for a while on a project; however, I noticed that sometimes the threads would hang and it would cause the program to crash. Another issue is that OpenCV does not capture video frames at a reliable rate and ffmpeg would distort the video when re-encoding.
I came up with a new solution that records much more reliably and with much higher quality. However, it will only work on Windows because it uses pywinauto and the built-in Windows Camera app. The last bit of the script does some error-checking to confirm the video recorded successfully by checking the timestamp in the video's file name. The script is the same one listed above:
https://gist.github.com/mjdargen/956cc968864f38bfc4e20c9798c7d670
I am trying to save 10 seconds of buffered video using a Python script, in particular in the '.rgb' format.
In order to do so, I have been using a PiCamera connected to a Raspberry Pi.
Based on the script below, if I choose to save the video using the h264 format, I can accomplish the desired goal successfully, but if I change the format from h264 to .rgb (the target format), no output is generated.
Any thoughts on what might be the issue here?
Thanks
Code snippet:
import time
import io
import os
import picamera
import datetime as dt
from PIL import Image
import cv2


# obtain current time
def return_currentTime():
    return dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')


# trigger event declaration
def motion_detected():
    while True:
        print("Trigger event(y)?")
        trigger = input()
        if trigger == "y":
            time = return_currentTime()
            print("Buffering...")
            camera.wait_recording(5)
            stream.copy_to(str(time) + '.rgb')
        else:
            camera.stop_recording()
            break


# countdown timer
def countdown(t):
    while t:
        mins, secs = divmod(t, 60)
        timer = '{:02d}:{:02d}'.format(mins, secs)
        print(timer, end="\r")
        time.sleep(1)
        t -= 1
    print('Buffer available!')


camera = picamera.PiCamera()
camera.resolution = (640, 480)
stream = picamera.PiCameraCircularIO(camera, seconds=5)

# code will work using h264 as format
camera.start_recording(stream, format='rgb')
countdown(5)
motion_detected()
This question has to do with your stream format and how stream.copy_to() works.
According to the docs, the function's signature is copy_to(output, size=None, seconds=None, first_frame=2), where first_frame restricts which frame the copy may start from. It is set to sps_header by default, which is usually the first frame of an H264 stream.
Since your stream format is RGB instead of H264, though, there are no sps_header frames, so copy_to never finds one and copies nothing.
To solve this, you have to allow any frame to be the first frame, not just sps_header frames. This can be done by setting first_frame=None in your call, like copy_to(file, first_frame=None).
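Applied to the question's script, a minimal sketch of the corrected recording setup looks like this (it needs a Raspberry Pi with a camera to actually run; the output filename is just an example):
import picamera

camera = picamera.PiCamera()
camera.resolution = (640, 480)
stream = picamera.PiCameraCircularIO(camera, seconds=5)
camera.start_recording(stream, format='rgb')

camera.wait_recording(5)
# first_frame=None lets the copy start at any frame instead of waiting for an
# H264 SPS header, which an RGB stream never contains.
stream.copy_to('buffered.rgb', first_frame=None)

camera.stop_recording()
camera.close()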
I've searched for a function in OpenCV (cv::videostab) that would allow me to do video stabilization in real time. But as I understand it, this is not yet available in OpenCV: TwoPassStabilizer (and OnePassStabilizer) require a whole video at once rather than two consecutive frames.
Ptr<VideoFileSource> source = makePtr<VideoFileSource>(inputPath); // it's the whole video
TwoPassStabilizer *twoPassStabilizer = new TwoPassStabilizer();
twoPassStabilizer->setFrameSource(source);
So I have to do this without the OpenCV video stabilization classes. Is this true?
The OpenCV library does not provide a dedicated module for real-time video stabilization.
That being said, if you're using Python, you can use my threaded VidGear video processing library, whose Stabilizer class provides real-time video stabilization with minimal latency and little to no additional computational overhead. Here's a basic usage example for your convenience:
# import libraries
from vidgear.gears import VideoGear
import cv2

# open any valid video stream with stabilization enabled
# (for e.g. the device at index 0)
stream = VideoGear(source=0, stabilize=True).start()

# infinite loop
while True:

    # read stabilized frames
    frame = stream.read()

    # check if frame is None
    if frame is None:
        # if True, break the infinite loop
        break

    # do something with the stabilized frame here

    # show output window
    cv2.imshow("Stabilized Frame", frame)

    # check for 'q' key-press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        # if 'q' key pressed, break out
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
More advanced usage can be found here: https://github.com/abhiTronix/vidgear/wiki/Real-time-Video-Stabilization#real-time-video-stabilization-with-vidgear
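If you also want to save the stabilized output to a file, one option (my sketch, not part of the example above) is to feed the stabilized frames into OpenCV's VideoWriter; the codec, frame rate and output name below are assumptions you would adapt:
from vidgear.gears import VideoGear
import cv2

stream = VideoGear(source=0, stabilize=True).start()
writer = None  # created lazily once the first frame tells us the frame size

while True:
    frame = stream.read()
    if frame is None:
        break

    if writer is None:
        h, w = frame.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*'XVID')  # assumed codec
        writer = cv2.VideoWriter('stabilized.avi', fourcc, 25.0, (w, h))  # assumed fps/name

    writer.write(frame)  # write the stabilized frame to disk

    cv2.imshow("Stabilized Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

if writer is not None:
    writer.release()
cv2.destroyAllWindows()
stream.stop()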
We created a module for video stabilization that works by fixing the coordinate system. It's open-source.
https://github.com/RnD-Oxagile/EvenVizion
I am working on a project where we are using the Raspicam attached to a Raspberry Pi to capture (and process) images with Python using the PiCamera module.
With our current implementation I am experiencing unexpected behaviour.
The camera is configured to capture images at 15 frames per second. To simulate processing time, the program waits 5 seconds between captures.
Minimal example:
#!/usr/bin/env python
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray


class RaspiCamera:
    def __init__(self, width, height, framerate):
        self.camera = PiCamera()
        self.camera.resolution = (width, height)
        self.camera.framerate = framerate
        self.rawCapture = PiRGBArray(self.camera, size=self.camera.resolution)
        self.capture_continuous = self.camera.capture_continuous(self.rawCapture, format="bgr", use_video_port=True)

    def capture(self):
        frame = self.capture_continuous.next()
        image = self.rawCapture.array
        self.rawCapture.truncate(0)
        return image


if __name__ == "__main__":
    camera = RaspiCamera(640, 480, 15)
    while True:
        frame = camera.capture()
        cv2.imshow("Image", frame)
        if cv2.waitKey(5000) & 0xFF == ord('q'):
            break
When capture() is called for the first time, self.capture_continuous.next() returns an up to date image. When calling capture() consecutively, it often happens that self.capture_continuous.next() does not return the latest image but one that is already a few seconds old (verified by pointing the camera at a clock). From time to time, it's even older than 10 seconds. On the other hand, sometimes self.capture_continuous.next() actually returns the latest image.
Since capture_continuous is an object of the type generator, my assumption is that it keeps generating camera images in the background that accumulate in a queue while the program waits and on the next call of self.capture_continuous.next() the next element in the queue is returned.
Anyway, I am only interested in the latest, most up to date image the camera has captured.
Some first attempts to get hold of the latest images failed. I tried to call self.capture_continuous.next() repeatedly in a while loop to get to the latest image.
Since a generator is apparently also an iterator I tried some methods mentioned in this post: Cleanest way to get last item from Python iterator.
Simply using the capture() function of the PiCamera class itself is not an option, since it takes approx. 0.3 seconds until the image is captured, which is too much for our use case.
Does anyone have a clue what might cause the delay described above and how it could be avoided?
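For illustration only (this is a common workaround pattern, not something from the post): one way to always get the newest image is to let a background thread drain capture_continuous as fast as frames arrive and keep only the most recent array, so the consumer never sees queued-up old frames. The class and variable names below are hypothetical.
import threading
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray


class LatestFrameCamera:
    """Drains capture_continuous in a background thread and keeps only the newest frame."""

    def __init__(self, width, height, framerate):
        self.camera = PiCamera()
        self.camera.resolution = (width, height)
        self.camera.framerate = framerate
        self.raw = PiRGBArray(self.camera, size=self.camera.resolution)
        self.latest = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        # consume frames as fast as the camera produces them
        for _ in self.camera.capture_continuous(self.raw, format="bgr", use_video_port=True):
            with self.lock:
                self.latest = self.raw.array.copy()
            self.raw.truncate(0)
            if not self.running:
                break

    def capture(self):
        with self.lock:
            return self.latest

    def stop(self):
        self.running = False


if __name__ == "__main__":
    cam = LatestFrameCamera(640, 480, 15)
    while True:
        frame = cam.capture()
        if frame is not None:
            cv2.imshow("Image", frame)
        if cv2.waitKey(5000) & 0xFF == ord('q'):  # 5 s wait simulates processing
            break
    cam.stop()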
I am familiar with programming but not with Python or Linux. I am programming in Python on a Raspberry Pi trying to create a security camera. Here is my code to test my current problem:
#!/usr/bin/python
import pygame, sys
from pygame.locals import *
from datetime import datetime
import pygame.camera
import time

pygame.init()
pygame.camera.init()

width = 640
height = 480
pic_root = "/root/cam/"

cam = pygame.camera.Camera("/dev/video0", (width, height), "RGB")
cam.start()

while True:
    raw_input("press enter")
    image = cam.get_image()
    filename = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + '.jpg'
    filepath = pic_root + filename
    pygame.image.save(image, filepath)
So when I press enter, an image is taken from the webcam and saved. But the image is always two images behind. No matter how long I wait between saving images, the first two are always very dim, as if the webcam has just started up, and the rest are always two images late.
So if I took 5 images, one with one finger up, the next with two fingers, etc., I would end up with two dark images and then the first three images (1, 2 and 3 fingers). It is as if the images are being stored somewhere, and when I try to save a live image it pulls up an old one.
Am I missing something here? What's the issue?
First, I'm not familiar with Pygame (but I do a lot of snapshot capturing with OpenCV -- here's one of my projects: http://vmlaker.github.io/wabbit.)
I changed your code so that on every iteration, you 1) start, 2) take a snapshot, and 3) stop the camera. This works a little better, in that it is only one image behind (instead of two). It's still pretty weird how the old image sticks around from the previous run; I haven't figured out how to flush the camera. Notice I also changed pic_root, and instead of the infinite loop I'm using only 3 iterations:
from datetime import datetime
import pygame
import pygame.camera

pygame.init()
pygame.camera.init()

width = 640
height = 480
pic_root = './'

cam = pygame.camera.Camera("/dev/video0", (width, height), "RGB")
#cam.start()

for ii in range(3):
    raw_input("press enter")
    cam.start()
    image = cam.get_image()
    cam.stop()
    filename = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + '.jpg'
    filepath = pic_root + filename
    pygame.image.save(image, filepath)
The OP's comment helped, but I actually have to pull the picture three times with get_image() before saving.
I also have a wakeup function which I call after a long standby time to wake the camera. Mine tends to return black images after a long idle time.
I guess all this weird stuff has something to do with a buffer, but the multiple calls to get_image() did the trick for me.
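A minimal sketch of that flushing approach, following the earlier snippets (the count of 3 is simply what worked here):
from datetime import datetime
import pygame
import pygame.camera

pygame.init()
pygame.camera.init()

cam = pygame.camera.Camera("/dev/video0", (640, 480), "RGB")
cam.start()

raw_input("press enter")
# Pull a few frames first to flush whatever the camera/driver has buffered,
# then keep and save only the last one.
for _ in range(3):
    image = cam.get_image()

filename = datetime.now().strftime("%Y_%m_%d_%H_%M_%S") + '.jpg'
pygame.image.save(image, filename)

cam.stop()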