I am trying to get input from my webcam using OpenCV and send it to a virtual camera using pyvirtualcam. For some reason, when my webcam feed is displayed it has a blue tint. When I display the webcam feed without sending it to the virtual camera, there is no tint and everything works fine.
import pyvirtualcam
import cv2
cap = cv2.VideoCapture(0)
with pyvirtualcam.Camera(width=1280, height=720, fps=20) as cam:
    while True:
        ret_val, frame = cap.read()
        frame = cv2.resize(frame, (1280, 720), interpolation=cv2.INTER_LINEAR)
        # cv2.imshow('my webcam', frame)
        cam.send(frame)
        cam.sleep_until_next_frame()
        if cv2.waitKey(1) == 27:
            break  # esc to quit
cv2.destroyAllWindows()
OpenCV uses BGR as its pixel format. pyvirtualcam expects RGB by default, but it supports BGR as well:
fmt = pyvirtualcam.PixelFormat.BGR
with pyvirtualcam.Camera(width=1280, height=720, fps=20, fmt=fmt) as cam:
I am working with OpenCV 4.4.0 in Python 3.7, and whenever I grab images from an external USB camera they come out oversaturated. How can I adjust the brightness parameter for the image capture?
The camera is an external Microsoft 1080p HD Sensor USB camera.
Below are the code and an image sample.
import cv2
import numpy

def get_img_camera():  # returns frame (img)
    cam = cv2.VideoCapture(0)  # 1 laptop camera, 0 external camera
    cam.set(3, 1280)
    cam.set(4, 720)
    cv2.namedWindow("Plates")
    while True:
        ret, frame = cam.read()
        if not ret:
            print("failed to grab frame")
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
        scale = 1.0  # opacity out of 100%
        frame_darker = (frame * scale).astype(numpy.uint8)
        #cam = frame_darker
        #cv2.imshow("Image", frame)
        #k = cv2.waitKey(0)
        img_name = "img_from_camera.jpg"
        cv2.imwrite(img_name, frame_darker)
        print("{} written!".format(img_name))
        break
    cam.release()
    cv2.destroyAllWindows()
    return frame

get_img_camera()
Image sample: oversaturated capture
Thank you in advance!
I think you can try the ApiPreference argument, which sets the preferred capture API backend. It can be used to enforce a specific reader implementation if multiple are available (https://docs.opencv.org/3.4/d4/d15/group__videoio__flags__base.html).
# capture from camera at location 0
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
# Brightness (0-100)
cap.set(10, 100)
# Saturation (0-100)
cap.set(12, 100)
Those functions work for me and are worth a try. Also make sure your Python and OpenCV versions are not too old.
I'm trying to create a tool that will take images from stereo cameras (connected to a sync board) with Python and OpenCV.
When I look at the image I get from OpenCV, it seems different from what I get with the Windows Camera app.
Both are set to the same resolution. What am I missing?
cap = cv2.VideoCapture(0)
# cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
# cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
while 1:
    ret, frame = cap.read()
    # cv2.imshow("frame", frame)
    cv2.imwrite('test.bmp', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
print('size:', width, height)
Both images are 1344x376.
Hi, I would like to run this code to detect cars using the raspicam on a Raspberry Pi B with OpenCV, but I encountered errors.
import numpy as np
import cv2

car_cascade = cv2.CascadeClassifier('cars3.xml')
cap = cv2.VideoCapture(0)

while 1:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in cars:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
After running the code it returns
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/pi/installopencv/opencv-3.1.0/modules/imgproc/src/color.cpp, line 8000
Traceback (most recent call last):
File "test.py", line 14, in <module>
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.error: /home/pi/installopencv/opencv-3.1.0/modules/imgproc/src/color.cpp:8000: error: (-215) scn == 3 || scn == 4 in function cvtColor
Is the error happening because I'm using the raspicam and cap = cv2.VideoCapture(0) only works for webcams? I tried enabling the V4L2 module, but that didn't work either.
If you want to use the Raspberry Pi camera module, use the picamera module to get the frames, not OpenCV's VideoCapture. In particular, you want to install the module with array support:
pip install "picamera[array]"
This will allow you to easily pass the frames to OpenCV.
There's a very good tutorial on how to start from scratch here, and here is the gist of it:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warm up
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
In your case, you may want to change the format from "bgr" to "yuv".
That way you can extract the Y (luminance) channel directly, which gives you the grayscale image. You should also gain a small speed boost from skipping the colourspace conversion (BGR to grayscale) and from fetching frames over CSI instead of USB.
I've been trying to display the camera feed from my laptop's webcam in grayscale, and I've done it using the following code:
import cv2
import numpy as np

clicked = False

def onMouse(event, x, y, flags, param):
    global clicked
    if event == cv2.cv.CV_EVENT_LBUTTONUP:
        clicked = True

cv2.namedWindow('image capture', cv2.WINDOW_NORMAL)
cv2.setMouseCallback('image capture', onMouse)

# initialize the camera object with VideoCapture
camera = cv2.VideoCapture(0)
success, frame = camera.read()
cv2.imwrite('snapshot.png', frame)
gray = cv2.imread('snapshot.png', cv2.IMREAD_GRAYSCALE)

while success and cv2.waitKey(1) == -1 and not clicked:
    cv2.imwrite('snapshot.png', frame)
    gray = cv2.imread('snapshot.png', cv2.IMREAD_GRAYSCALE)
    cv2.imshow('image capture', gray)
    success, frame = camera.read()

cv2.imwrite('snapshot.png', frame)
print('photo taken, press any key to exit')
cv2.waitKey()
cv2.destroyAllWindows()
What I've done here is save the frame to 'snapshot.png', reload it in grayscale, and display that grayscale image. Is there any way to read the camera frame directly in grayscale rather than going through all this mess? Thanks in advance.
wow, what a mess ;)
you simply want:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
In recent versions of OpenCV, cvtColor asserts that its src is not None, and otherwise raises the 215 assertion error.
This is basically a scenario where you have to guard against bad frames, much like handling an exception with a try/except block.
Code to overcome this problem:
while True:
    ret, frame = cap.read()
    if frame is not None:  # frame.any() would itself fail on a None frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame', gray)
I am trying to superimpose an image over a camera feed in python. I can get an image to superimpose over another image, but when I apply the same thing to my camera feed it doesn't work. Here's my code so far:
#!/usr/bin/python
import cv2
import time

cv2.cv.NamedWindow("Hawk Eye", 1)
capture = cv2.cv.CaptureFromCAM(0)
cv2.cv.SetCaptureProperty(capture, cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 800)
cv2.cv.SetCaptureProperty(capture, cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 600)

x_offset = y_offset = 50
arrows = cv2.imread("arrows.png")

while True:
    webcam = cv2.cv.QueryFrame(capture)
    #webcam[y_offset:y_offset+arrows.shape[0], x_offset:x_offset+arrows.shape[1]] = arrows
    cv2.cv.ShowImage("Hawk Eye", webcam)
    if cv2.cv.WaitKey(10) == 27:
        break

cv2.cv.DestroyAllWindows()
With the overlay line commented out:
    webcam[y_offset:y_offset+arrows.shape[0], x_offset:x_offset+arrows.shape[1]] = arrows
it shows just the camera feed, but when I add the line back into my loop it stops working. Thanks!
This works OK using the cv2 API:
import cv2
import time

cv2.namedWindow("Hawk Eye", 1)
capture = cv2.VideoCapture(0)
capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 800)   # cv2.CAP_PROP_FRAME_WIDTH in OpenCV 3+
capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 600)  # cv2.CAP_PROP_FRAME_HEIGHT in OpenCV 3+

x_offset = y_offset = 50
arrows = cv2.imread("hawk.png")

while True:
    ret, webcam = capture.read()
    if ret:
        webcam[y_offset:y_offset+arrows.shape[0], x_offset:x_offset+arrows.shape[1]] = arrows
        cv2.imshow("Hawk Eye", webcam)
    if cv2.waitKey(10) == 27:
        break

cv2.destroyAllWindows()
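The slice assignment works here because capture.read() returns a NumPy array, whereas the old QueryFrame returns an IplImage that doesn't support this indexing. The pasting logic itself can be sanity-checked without a camera (the array sizes below are arbitrary stand-ins):

```python
import numpy as np

# Stand-ins for a camera frame and an overlay image.
webcam = np.zeros((600, 800, 3), dtype=np.uint8)          # black "frame"
arrows = np.full((100, 150, 3), 255, dtype=np.uint8)      # white overlay

x_offset = y_offset = 50
webcam[y_offset:y_offset + arrows.shape[0],
       x_offset:x_offset + arrows.shape[1]] = arrows

# Inside the overlay region the frame is now white; outside it is untouched.
print(webcam[50, 50], webcam[0, 0])
```

Note the assignment raises an error if the overlay extends past the frame edge, so keep the offsets within frame.shape minus the overlay size.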