I have the code:
import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox
im = cv2.imread(r'C:\Users\james\OneDrive\Desktop\logos\normal.png')
bbox, label, conf = cv.detect_common_objects(im)
output_image = draw_bbox(im, bbox, label, conf)
plt.imshow(output_image)
plt.show()
That detects most objects in the picture, but I want to be able to do the same from live video. I tried using cap = cv2.VideoCapture(0), but I could not get it to work.
A typical program executes every line until it's done, and then exits. Your code asks it to open a stream and show a frame. Then, because it's done everything that you've asked it to do, it exits.
In the code you provided in the comments, you are calling matplotlib's .clf() immediately after .show(), which clears the figure and is potentially the cause of your issue. You should be using cv2.imshow() instead.
If you want it to remain open, you need to put your displaying code inside a while True loop, as shown in the getting-started part of the docs. Combining their code with yours, you would get something like the following (though I haven't tested it):
import numpy as np
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox
cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Detect objects and draw them on the current frame
    bbox, label, conf = cv.detect_common_objects(frame)
    output_image = draw_bbox(frame, bbox, label, conf)
    cv2.imshow('output', output_image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
For a webcam, this will keep going until you press q to interrupt the program.
Be warned, this may not run smoothly because object detection models are typically fairly computationally expensive. But that is a challenge for another question.
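If the frame rate turns out to be too low, a common trick is to run the detector only on every Nth frame and reuse the last set of boxes in between. Here is a rough, untested sketch of that idea (the skip interval of 10 is arbitrary):
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox

cap = cv2.VideoCapture(0)
DETECT_EVERY = 10          # run the detector only on every 10th frame (arbitrary choice)
frame_idx = 0
bbox, label, conf = [], [], []

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Only run the expensive detector occasionally; reuse the previous boxes otherwise
    if frame_idx % DETECT_EVERY == 0:
        bbox, label, conf = cv.detect_common_objects(frame)
    output_image = draw_bbox(frame, bbox, label, conf)
    cv2.imshow('output', output_image)
    frame_idx += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()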
Related
import cv2
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('Our live sketch', frame)
    if cv2.waitKey(1) == 13:
        break
cap.release()
When I use cv2.VideoCapture(1), the program shows an error, but it works properly if I use cv2.VideoCapture(0).
That's the index of the camera. It is used to select between different cameras if you have more than one attached; by default, 0 is your main one.
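If you are not sure which indices exist on your machine, one quick (untested) way to check is to try opening the first few indices and see which ones succeed:
import cv2

# Probe the first few camera indices and report which ones can be opened
for i in range(4):
    cap = cv2.VideoCapture(i)
    if cap.isOpened():
        print("Camera index", i, "is available")
    cap.release()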
Does anyone know a way to create a camera source using Python? For example, I have two cameras:
Webcam
USB Cam
Whenever I use any web application or interface which requires camera, I have an option to select a camera source from the above two mentioned.
What I want to achieve is this: I am processing real-time frames in my Python program with a camera portal of my own, like this:
import numpy as np
import cv2
while True:
    _, frame = self.cam.read()
    k = cv2.waitKey(1)
    if k & 0xFF == ord('q'):
        self.cam.release()
        cv2.destroyAllWindows()
        break
    else:
        cv2.imshow("Frame", frame)
Now I want to use these frames as a camera source, so that next time, whenever I open a piece of software or a web app which requires a camera, it shows the options as follows:
Webcam
USB Cam
Python Cam (Or whatever be the name)
Does anyone have any advice or hints on how to go about that? I have seen some premium software which generates its own camera source, but it is written in C++. I was wondering whether this could be done in Python or not.
Here is an example of the same:
As you can see, there are multiple camera sources there. I want to add one camera source which displays the frames processed by Python in its feed.
You can use v4l2loopback (https://github.com/umlaeute/v4l2loopback) for that. It is written in C but there are some python wrappers. I've used virtualvideo (https://github.com/Flashs/virtualvideo).
The virtualvideo README is pretty straightforward, but here is my modification of their GitHub example to match your goal:
import virtualvideo
import cv2


class MyVideoSource(virtualvideo.VideoSource):
    def __init__(self):
        self.cam = cv2.VideoCapture(0)
        _, img = self.cam.read()
        size = img.shape
        # OpenCV's shape is (y, x, channels)
        self._size = (size[1], size[0])

    def img_size(self):
        return self._size

    def fps(self):
        return 30

    def generator(self):
        while True:
            _, img = self.cam.read()
            yield img


vidsrc = MyVideoSource()
fvd = virtualvideo.FakeVideoDevice()
fvd.init_input(vidsrc)
fvd.init_output(2, 640, 480, pix_fmt='yuyv422')
fvd.run()
In init_output the first argument is the virtual camera device that I created when adding the kernel module:
sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1
The last arguments are my webcam's frame size and the pixel format.
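As a quick sanity check (just a sketch, assuming the loopback device really was created as /dev/video2), you can try opening the virtual camera from another process with OpenCV and see whether it delivers frames:
import cv2

# Open the v4l2loopback device (index 2 here, matching video_nr=2 above)
cap = cv2.VideoCapture(2)
ret, frame = cap.read()
print("Virtual camera delivers frames:", ret)
cap.release()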
You should then see this option in Hangouts.
I am using OpenCV in Python, and the aim of the code is to display one image after another in the same window without destroying it. For example, i1.jpg is displayed, and then, after a pause of 5 s, the left-arrow image is displayed in the same window.
In my case, i1.jpg is displayed, then the window is destroyed, and then i2.jpg is displayed in another window.
Here is the code for the same -
t_screen_time = 5000
import numpy
import cv2
left_arrow = "left_arrow.jpg"
right_arrow = "right_arrow.jpg"
img = cv2.imread(left_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
cv2.destroyAllWindows()
img = cv2.imread(right_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
cv2.destroyAllWindows()
The argument in cv2.waitKey() is the time to wait for a keypress and not exactly the time the image is displayed for.
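In other words (a tiny illustration, not part of the original code), waitKey() returns the key code as soon as a key is pressed and only waits the full duration when no key arrives:
import cv2
import numpy as np

cv2.imshow('image', np.zeros((100, 100, 3), np.uint8))
key = cv2.waitKey(5000)  # returns the pressed key's code as soon as a key is hit,
                         # or -1 if nothing is pressed within 5 s
print(key)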
cv::imshow() or cv2.imshow() was not designed to be a complete GUI. It was designed to let you quickly debug and display images/videos.
You can achieve what you need using some form of sleep() in a thread, but that is harder to do in Python. You could also just edit your code to
t_screen_time = 5000
import numpy
import cv2
left_arrow = r"first/image"
right_arrow = r"second/image"
img = cv2.imread(left_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
img = cv2.imread(right_arrow)
cv2.imshow('image',img)
cv2.waitKey(t_screen_time)
cv2.destroyAllWindows()
by removing the cv2.destroyAllWindows() before displaying the second image. I've tested this code and it works, but if you're looking for more GUI functionality, use something like Qt.
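If you later want to cycle through more than two images, the same idea generalises to looping over a list of filenames in a single window; here is a small untested sketch (the filenames are placeholders):
import cv2

t_screen_time = 5000                      # how long waitKey() waits, in milliseconds
images = ["i1.jpg", "i2.jpg", "i3.jpg"]   # placeholder filenames

for path in images:
    img = cv2.imread(path)
    cv2.imshow('image', img)              # reuse the same window each time
    cv2.waitKey(t_screen_time)            # wait up to 5 s (or until a key is pressed)

cv2.destroyAllWindows()                   # only destroy the window at the very end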
I'm running OpenCV 3 on Python 3.4 on a 2015 15" MacBook Pro. Below is a minimal example that illustrates my problem:
import cv2 as cv
import numpy as np

def mouse_callback(event, x, y, flags, param):
    print("Callback!")

cv.namedWindow("Display")
cv.setMouseCallback("Display", mouse_callback)
cap = cv.VideoCapture(0)

while True:
    ret, frame = cap.read()
    cv.imshow("Display", frame)
    if cv.waitKey(1) == ord("q"):
        break
When I click on the screen, the text "Callback!" takes about 3 seconds to appear on the terminal screen. I'm not sure why I'm seeing this much lag--my laptop shouldn't be so bad that I can't even run this simple script.
Additionally, the problem persists when I reduce the webcam resolution, or even when I replace the webcam altogether with a still image. I rewrote a similar program in C++, and the C++ OpenCV libraries also suffer from this lag.
Any tips on how I can reduce or eliminate the lag?
Try this:
import cv2 as cv
import numpy as np

def mouse_callback(event, x, y, flags, param):
    if event == cv.EVENT_LBUTTONDOWN:
        print("Callback!")

cv.namedWindow("Display")
cv.setMouseCallback("Display", mouse_callback)
cap = cv.VideoCapture(0)

while True:
    ret, frame = cap.read()
    cv.imshow("Display", frame)
    if cv.waitKey(1) == ord("q"):
        break
Although it's most likely a problem with something else: maybe your webcam resolution is too high.
So try this also:
cap.set(3, 640)   # 3 = cv.CAP_PROP_FRAME_WIDTH
cap.set(4, 480)   # 4 = cv.CAP_PROP_FRAME_HEIGHT
Put the above code above the while loop and check.
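For clarity, the placement would look like this (a sketch using the named OpenCV constants instead of the raw property indices 3 and 4):
import cv2 as cv

cap = cv.VideoCapture(0)

# Lower the capture resolution before entering the display loop
cap.set(cv.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ret, frame = cap.read()
    cv.imshow("Display", frame)
    if cv.waitKey(1) == ord("q"):
        break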
I do not understand this code taken from OpenCV documentation:
import cv2
import numpy as np

# mouse callback function
def draw_circle(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img, (x, y), 100, (255, 0, 0), -1)

# Create a black image, a window, and bind the function to the window
img = np.zeros((512, 512, 3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_circle)
while(1):
    cv2.imshow('image', img)
    if cv2.waitKey(20) & 0xFF == 27:
        break
cv2.destroyAllWindows()
Specifically, I do not understand why the parameters of the draw_circle() function are declared that way when some of them are never used later. Can someone explain to me what's behind that, please?
Below is my code, which is working fine; just check it out:
import cv2
import numpy as np

img1 = cv2.imread('Image/Desert.jpg')

def draw(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img1, (x, y), 4, (0, 0, 0), 4)

cv2.namedWindow('Image')
cv2.setMouseCallback('Image', draw)

while(1):
    cv2.imshow('Image', img1)
    if cv2.waitKey(20) & 0xFF == 27:
        break
cv2.destroyAllWindows()
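To address the original question more directly: OpenCV itself invokes the callback and always passes it (event, x, y, flags, param), so the function has to accept all five arguments even if it ignores some of them. The param slot is filled with whatever userdata you pass as the third argument to setMouseCallback, which is a way to avoid globals. A small untested sketch of that (the dictionary contents are just an example):
import cv2
import numpy as np

def draw_circle(event, x, y, flags, param):
    # param receives whatever userdata was passed to setMouseCallback
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(param['img'], (x, y), param['radius'], (255, 0, 0), -1)

img = np.zeros((512, 512, 3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_circle, {'img': img, 'radius': 100})

while True:
    cv2.imshow('image', img)
    if cv2.waitKey(20) & 0xFF == 27:   # Esc quits
        break
cv2.destroyAllWindows()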