Does anyone know a way to create a camera source using Python? For example, I have 2 cameras:
Webcam
USB Cam
Whenever I use any web application or interface which requires a camera, I have the option to select a camera source from the two mentioned above.
What I want to achieve is this: I am processing real-time frames in my Python program with a camera portal of my own, like this:
import numpy as np
import cv2

cam = cv2.VideoCapture(0)  # open the default webcam

while True:
    _, frame = cam.read()
    k = cv2.waitKey(1)
    if k & 0xFF == ord('q'):
        cam.release()
        cv2.destroyAllWindows()
        break
    else:
        cv2.imshow("Frame", frame)
Now I want to use this frame as a camera source, so that the next time I open a software or web app which requires a camera, it shows the options as follows:
Webcam
USB Cam
Python Cam (Or whatever be the name)
Does anyone have any advice or hints on how to go about this? I have seen some premium software that generates its own camera source, but it's written in C++. I was wondering whether this could be done in Python or not.
Here is an example of the same:
As you can see, there are multiple camera sources there. I want to add one camera source which displays the frames processed by Python in its feed.
You can use v4l2loopback (https://github.com/umlaeute/v4l2loopback) for that. It is written in C, but there are some Python wrappers. I've used virtualvideo (https://github.com/Flashs/virtualvideo).
The virtualvideo README is pretty straightforward, but here is my modification of their GitHub example to match your goal:
import virtualvideo
import cv2

class MyVideoSource(virtualvideo.VideoSource):
    def __init__(self):
        self.cam = cv2.VideoCapture(0)
        _, img = self.cam.read()
        size = img.shape
        # OpenCV's shape is (y, x, channels)
        self._size = (size[1], size[0])

    def img_size(self):
        return self._size

    def fps(self):
        return 30

    def generator(self):
        while True:
            _, img = self.cam.read()
            yield img

vidsrc = MyVideoSource()
fvd = virtualvideo.FakeVideoDevice()
fvd.init_input(vidsrc)
fvd.init_output(2, 640, 480, pix_fmt='yuyv422')
fvd.run()
In init_output the first argument is the virtual camera device number that I created when adding the kernel module:
sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1
The last arguments are my webcam's size and the pixel format.
You should see this option in Hangouts:
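If you want the virtual camera to carry your processed frames rather than the raw feed, the processing can go inside generator(). Here is a minimal sketch, assuming the same v4l2loopback setup as above (the Canny edge step is just an illustrative stand-in for your own processing):

import virtualvideo
import cv2

class ProcessedVideoSource(virtualvideo.VideoSource):
    def __init__(self):
        self.cam = cv2.VideoCapture(0)
        _, img = self.cam.read()
        self._size = (img.shape[1], img.shape[0])

    def img_size(self):
        return self._size

    def fps(self):
        return 30

    def generator(self):
        while True:
            _, img = self.cam.read()
            # any OpenCV processing goes here; Canny edges as an example
            edges = cv2.Canny(img, 100, 200)
            # convert back to 3 channels so the output format stays consistent
            yield cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)

vidsrc = ProcessedVideoSource()
fvd = virtualvideo.FakeVideoDevice()
fvd.init_input(vidsrc)
fvd.init_output(2, 640, 480, pix_fmt='yuyv422')
fvd.run()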
Related
My current program is in Python and uses OpenCV. I rely on webcam captures and I am processing every captured frame:
import cv2

# use the webcam
cap = cv2.VideoCapture(0)

while True:
    # read a frame from the webcam
    ret, img = cap.read()
    # transform image
I would like to make a Kivy interface (or another graphical user interface) with buttons, keeping already existing functionality with webcam captures.
I found this example:
https://kivy.org/docs/examples/gen__camera__main__py.html
— but it doesn’t explain how to acquire the webcam image to process it with OpenCV.
I found an older example:
http://thezestyblogfarmer.blogspot.it/2013/10/kivy-python-script-for-capturing.html
— it saves screenshots to disk using the ‘screenshot’ function. Then I can read the saved files and process them, but this seems to be an unnecessary step.
What else can I try?
Found this example here: https://groups.google.com/forum/#!topic/kivy-users/N18DmblNWb0
It converts the OpenCV captures to Kivy textures, so you can do any kind of CV transformation before displaying it in your Kivy interface.
__author__ = 'bunkus'
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.image import Image
from kivy.clock import Clock
from kivy.graphics.texture import Texture

import cv2

class CamApp(App):
    def build(self):
        self.img1 = Image()
        layout = BoxLayout()
        layout.add_widget(self.img1)

        # opencv2 stuffs
        self.capture = cv2.VideoCapture(0)
        cv2.namedWindow("CV2 Image")
        Clock.schedule_interval(self.update, 1.0 / 33.0)
        return layout

    def update(self, dt):
        # display image from cam in opencv window
        ret, frame = self.capture.read()
        cv2.imshow("CV2 Image", frame)

        # convert it to texture
        buf1 = cv2.flip(frame, 0)
        buf = buf1.tobytes()  # tostring() is deprecated in recent NumPy
        texture1 = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
        # if working on RASPBERRY PI, use colorfmt='rgba' here instead,
        # but stick with 'bgr' in blit_buffer
        texture1.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
        # display image from the texture
        self.img1.texture = texture1

if __name__ == '__main__':
    CamApp().run()
    cv2.destroyAllWindows()
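One note on this example: the cv2.namedWindow / cv2.imshow calls only mirror the feed in a separate OpenCV window for debugging; you can drop them and the frames will still reach the Kivy Image widget through the texture. Any OpenCV processing you already do on the captured frame can go right after self.capture.read(), before the flip and blit.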
Note: I have no clue how OpenCV works, but I found camera_opencv.py, which means there is an easy way to work with it.
As you can see in the camera example, this is the default way, and when you look in __init__.py for camera you can see opencv among the providers, so perhaps it works with OpenCV out of the box. Check the log to see if OpenCV is detected as a provider: you should see CameraOpenCV written somewhere if it's detected, and it should show itself when capturing an image.
If you, however, want to work with OpenCV directly (i.e. cap.read() and similar), then you need to write your own handler for the provider or append more options to the camera_opencv file.
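As a quick check, a minimal Kivy app with the built-in Camera widget will go through whatever provider Kivy detected (OpenCV among them). A sketch, assuming a camera at index 0:

from kivy.app import App
from kivy.uix.camera import Camera

class ProviderCheckApp(App):
    def build(self):
        # Kivy picks a camera provider automatically; the log shows which one
        return Camera(index=0, resolution=(640, 480), play=True)

if __name__ == '__main__':
    ProviderCheckApp().run()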
Recently I have installed OpenCV and OpenPose for tracking the head of a 3D character created, rigged and animated in Blender 2.81 on Windows 10. When I use OpenCV, I use this addon:
https://github.com/jkirsons/FacialMotionCapture
and when I use Open Pose,I have installed this addon :
https://gitlab.com/sat-metalab/blender-addon-openpose
What's the problem? The problem is that the addon written for OpenCV works well, but the addon for OpenPose doesn't. I have no Python experience, but I have looked inside the addons' code to try to understand why.
Please check the relevant Python code from the OpenCV / Facial Motion Capture addon below:
# Show camera image in a window
cv2.imshow("Output", image)
cv2.waitKey(1)
return {'PASS_THROUGH'}

def init_camera(self):
    if self._cap == None:
        self._cap = cv2.VideoCapture(0)
        self._cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.width)
        self._cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.height)
        self._cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
        time.sleep(1.0)
Now check the relevant Python code from the OpenPose addon below:
class Camera:
    """
    Utility class embedding a camera, its parameters and buffers
    """
    def __init__(self, path: str) -> None:
        self._path = path
        self._camera = cv2.VideoCapture()
        self._camera.open(path)
        self._shape: Tuple[int, int, int] = (0, 0, 0)
        self._bbox = [180, 120, 270, 270]
        self._bbox_new = self._bbox

class OpenPoseWrapper:
    def __init__(self) -> None:
        self._cameras: List[Camera] = []
        self._image_buffer: Optional[bpy.types.Image] = None
        self._camera_paths: List[str] = ['/dev/video0', '/dev/video1']
        self._is_stereo = False
        self._is_stereo_calibrated = False
What I want to know is why OpenCV is able to grab the video source that I use, while the OpenPose addon is not able to do that. I suspect that the code of the OpenPose addon has been written for Linux, but I'm using Windows 10. For this reason I should change this line:
self._camera_paths: List[str] = ['/dev/video0', '/dev/video1']
In fact, in Windows 10 there isn't any kind of device like that. So how should I change the code so that Windows 10 is able to detect the video source that the code written for OpenCV can detect?
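For illustration, a hedged sketch of the kind of change involved: on Windows, OpenCV opens cameras by integer index (optionally with an explicit backend such as cv2.CAP_DSHOW) rather than by a /dev/video* path. This assumes the addon passes the list entries straight to cv2.VideoCapture:

import cv2

# Linux-style device paths, as in the addon:
#   camera_paths = ['/dev/video0', '/dev/video1']

# Windows equivalent: integer indices (0 = first camera)
camera_paths = [0, 1]

# CAP_DSHOW selects the DirectShow backend, often more reliable on Windows
cap = cv2.VideoCapture(camera_paths[0], cv2.CAP_DSHOW)
print("opened:", cap.isOpened())
cap.release()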
I've searched for a function in OpenCV (cv::videostab) that would allow me to do video stabilization in real time. But as I understand it, this is not yet available in OpenCV: TwoPassStabilizer (and OnePassStabilizer) require a whole video at once, not two consecutive frames.
Ptr<VideoFileSource> source = makePtr<VideoFileSource>(inputPath); // the whole video
TwoPassStabilizer *twoPassStabilizer = new TwoPassStabilizer();
twoPassStabilizer->setFrameSource(source);
So I have to do this without the OpenCV video stabilization classes. Is this true?
The OpenCV library does not provide a dedicated code/module for real-time video stabilization.
That being said, if you're using Python, you can use my threaded VidGear video processing library, which now provides real-time video stabilization with minimal latency and little to no additional computational overhead through its Stabilizer class. Here's a basic usage example for your convenience:
# import required libraries
from vidgear.gears import VideoGear
import cv2

# open any valid video stream with stabilization enabled
# (e.g. the device at index 0)
stream = VideoGear(source=0, stabilize=True).start()

# infinite loop
while True:
    # read stabilized frames
    frame = stream.read()

    # check if frame is None
    if frame is None:
        # if True, break the infinite loop
        break

    # do something with the stabilized frame here

    # show output window
    cv2.imshow("Stabilized Frame", frame)

    # check for 'q' key-press
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        # if 'q' key pressed, break out
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
More advanced usage can be found here: https://github.com/abhiTronix/vidgear/wiki/Real-time-Video-Stabilization#real-time-video-stabilization-with-vidgear
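If you'd rather keep your own cv2.VideoCapture loop, the Stabilizer class can also be used standalone. A minimal sketch, where the parameter values are just illustrative:

# import required libraries
from vidgear.gears.stabilizer import Stabilizer
import cv2

cap = cv2.VideoCapture(0)
stab = Stabilizer(smoothing_radius=25, border_type="black")  # illustrative settings

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # stabilize() returns None until its internal frame buffer fills up
    stab_frame = stab.stabilize(frame)
    if stab_frame is None:
        continue

    cv2.imshow("Stabilized", stab_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

stab.clean()  # release the Stabilizer's resources
cap.release()
cv2.destroyAllWindows()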
We created a module for video stabilization by fixing a coordinate system. It's open-source.
https://github.com/RnD-Oxagile/EvenVizion
I'm using a Raspberry Pi 3 to detect motion using SimpleCV, and then when motion is detected an image should be taken. I'm using a USB camera. This is the code I'm using:
from SimpleCV import *
import os

cam = Camera()
threshold = 5.0  # if mean exceeds this amount, do something

while True:
    previous = cam.getImage()  # grab a frame
    time.sleep(0.5)            # wait for half a second
    current = cam.getImage()   # grab another frame
    diff = current - previous
    matrix = diff.getNumpy()
    mean = matrix.mean()
    if mean >= threshold:
        time.sleep(2)
        os.popen("fswebcam -d /dev/video0 -r 352x280 /home/pi/Desktop/image.jpg")
        print "Motion Detected"
When motion is detected, it prints "Motion Detected", so that part works, but no image is taken. I tried running fswebcam -d /dev/video0 -r 352x280 /home/pi/Desktop/image.jpg in the terminal and it worked fine. Also, when I ran the code to take an image in Python by itself, it also worked, but inside the if statement it won't work. I tried running the program from the terminal, and again the motion detection works but I get this error:
Error selecting input 0
VIDIOC_S_INPUT: Device or resource busy
What's the problem here?
It is mentioned in the documentation that the resource will be locked once you have an active instance of Camera.
SimpleCV Camera Class documentation is here:
class Camera(FrameSource):
    """
    **SUMMARY**

    The Camera class is the class for managing input from a basic camera. Note
    that once the camera is initialized, it will be locked from being used
    by other processes. You can check manually if you have compatible devices
    on linux by looking for /dev/video* devices.
    """
As your resource is locked, it is unavailable or remains busy when fswebcam tries to access it.
Instead, as you already have an instance of the camera, cam, use it to capture the image as below:
img = cam.getImage()
img.save("motion.jpg")
Alternative Solution
You can just do del cam.capture before the line below:
os.popen("fswebcam -d /dev/video0 -r 352x280 /home/pi/Desktop/image.jpg")
I need to take a single snapshot from a webcam. I chose SimpleCV for this task.
Now I'm trying to get a single image and show it:
from SimpleCV import Camera
cam = Camera()
img = cam.getImage()
img.show()
But I see only a black image. I think the camera is not ready at this moment, because if I call time.sleep(10) before cam.getImage() everything works fine.
What is the right way to do this? Thank you!
Have you installed PIL? I had similar problems, but after installing PIL everything worked fine. Hope this helps.
You can download PIL from Pythonware.
I ran into this same issue and came up with the following work-around. Basically it grabs an image, tests the middle pixel to see if it's black (0,0,0). If it is, then it waits 1 second and tries again.
import time
from SimpleCV import Camera

cam = Camera()
r = g = b = 0
img = None
while r + g + b < 0.01:
    img = cam.getImage()
    # sample the middle pixel; (0, 0, 0) means the frame is still black
    r, g, b = img[img.width / 2, img.height / 2]
    print("r: {} g: {} b: {}".format(r, g, b))
    time.sleep(1)

img.save(file_name)  # file_name is whatever path you want to save to
Your problem may be that when the script ends, the camera object doesn't seem to release the webcam for other programs. When you wait before running the script again, Windows frees it up (maybe?) so it will work again. If you use it while the old script still thinks it owns the webcam, it shows up as black. Anyway, the solution here seems to work: add del cam to the end of your script, which will make the camera object go away and let you use the camera again.
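A minimal sketch of that cleanup, with the explicit release at the end:

from SimpleCV import Camera

cam = Camera()
img = cam.getImage()
img.show()

# explicitly drop the Camera object so the webcam is freed for other programs
del cam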
Try this:
from SimpleCV import Camera

cam = Camera(0)
while True:
    img = cam.getImage()
    img.show()