I have a small OpenCV script that works with my Windows laptop's built-in camera but doesn't work when I plug in an external Logitech C922 Pro HD Stream webcam.
I have changed the number in cap = cv2.VideoCapture(xxxNUMBERxxx) to try to open the camera, with no luck. I have also installed the Logitech drivers, with no luck. The rest of the internet seems to think it's a permissions issue, but I can't find any specific permission settings, so I'm not sure that's it. I just want to be able to get a video stream from the camera.
The script is below:
import numpy as np
import cv2

cap = cv2.VideoCapture(2, cv2.CAP_DSHOW)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
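Since the question mentions trying different numbers in VideoCapture, a quick way to see which index/backend combination actually opens the external camera is to probe a few of them. This is a minimal sketch; the index range and the list of backends tried are assumptions, not part of the original question:

import cv2

# Backends commonly available on Windows builds of OpenCV
backends = [cv2.CAP_DSHOW, cv2.CAP_MSMF, cv2.CAP_ANY]

for index in range(4):          # try the first few device indices
    for backend in backends:
        cap = cv2.VideoCapture(index, backend)
        # Consider it working only if it opens and actually returns a frame
        ok = cap.isOpened() and cap.read()[0]
        print(f"index={index}, backend={backend}: {'OK' if ok else 'failed'}")
        cap.release()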
It turns out to be a strange and silly solution. After downloading the 'Logi Capture' app, I had to keep it running, with the video streaming through it, while opening the OpenCV script. It seems like a workaround rather than a solution, but it works for me.
Alternatively, the solution is to completely uninstall the Logi Capture application.
Could you please tell me, is there a way to get video streaming into NiFi? I am using NiFi's "ExecuteProcess" processor to run Python and OpenCV in order to capture the video stream, but I am not able to produce multiple flow files. My flow runs the Python code and then puts all the captured frames into a single flow file. Since I want to do analysis with YOLOv3 on live video, I want each frame in a separate flow file. The code is given below. "ExecuteProcess" waits for the program to finish and then stores everything written to the output console in a single flow file, but I want multiple flow files.
If there is a better way to get live video streaming into NiFi, please share it.
import cv2

cap = cv2.VideoCapture("videolink.mp4")

while True:
    ret, image = cap.read()
    if not ret:          # stop when no more frames are returned
        break
    # Encode the frame as JPEG and write the bytes to stdout
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    print(jp)

cap.release()
cv2.destroyAllWindows()
I think the simplest way to set up the streaming is:
1/ Set up a ListenHTTP processor that will receive the video parts through incoming POST requests; this becomes your data ingestion point.
2/ In your Python script, replace the print with an HTTP POST call, like this:
import cv2
import requests

cap = cv2.VideoCapture("videolink.mp4")

while True:
    ret, image = cap.read()
    if not ret:          # stop when no more frames are returned
        break
    # Encode the frame as JPEG and POST the bytes to NiFi
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    # PORT and BASE_PATH must match the ListenHTTP processor configuration
    requests.post('http://localhost:PORT/BASE_PATH', data=jp)

cap.release()
cv2.destroyAllWindows()
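ListenHTTP creates one flow file per incoming request, so posting each frame separately gives the one-frame-per-flow-file behaviour you are after. If it helps downstream processors, you can also send a Content-Type header with each request; this is a small optional addition, not part of the original answer, replacing the requests.post line above with:

requests.post('http://localhost:PORT/BASE_PATH', data=jp, headers={'Content-Type': 'image/jpeg'})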
When I open my v4l2 device, the left and right streams come joined together and open at the same time. Is there a way to split the left and right sensor image frames and display them simultaneously using GStreamer?
EDIT1:
OK, so I have a v4l2 device, a stereo camera, with both the left and right streams writing to /dev/video0. Using GStreamer I was able to view both frames together. I would like to know how to split the left frame and the right frame and display them in separate windows. I am also trying the script below in OpenCV, where I am only getting the right video stream; I want to be able to view both streams in separate windows, either in OpenCV or using GStreamer.
The OpenCV version is below:
import os
import v4l2capture
import select
import numpy as np
import cv2

video = cv2.VideoCapture("/dev/video0")

while True:
    ret, frame = video.read()
    cv2.imshow('whatever', frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

video.release()
cv2.destroyAllWindows()
The basic GStreamer pipeline just uses a source and a sink:
gst-launch-1.0 v4l2src ! xvimagesink
I was able to get the answer to my question here,
https://www.stereolabs.com/docs/opencv/python/
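For anyone who cannot follow the link: as I understand it, the approach on that page amounts to grabbing the combined side-by-side frame and slicing it down the middle with NumPy. Below is a minimal sketch of that idea; the side-by-side layout and the device path are assumptions:

import cv2

# Open the combined stereo stream (device path is an assumption)
cap = cv2.VideoCapture("/dev/video0")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # The stereo pair arrives side by side in one frame,
    # so split the image down the middle into left and right halves
    h, w = frame.shape[:2]
    left = frame[:, : w // 2]
    right = frame[:, w // 2:]
    cv2.imshow("left", left)
    cv2.imshow("right", right)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()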
I want to communicate between software installed on my PC for an Optris PI 640 and my Python code. Do I have to use a virtual COM port for serial communication, or can I communicate without one?
It looks like the Optris PI 640 is a USB camera, so maybe you can use OpenCV. Here is a little example of capturing video from a USB camera:
import numpy as np
import cv2
import imutils

cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('camara', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
print("fin")
You must have OpenCV and numpy installed... Also, can you specify which OS (Windows, Linux, etc.) you are using?
I can access the camera and take a picture with fswebcam. However, if I run the Python code below I get "Camera Not Online". I built opencv-3.3.1 and opencv_contrib-3.3.1 per this tutorial, and it works fine for my Raspberry Pi Camera Module, just not for the webcam.
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if cap.isOpened():
        print("Webcam online.")
    else:
        print("Camera Not Online")

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Going crazy. Any help greatly appreciated.
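Not an answer, but one way to narrow this down is to check isOpened() for a few different ways of addressing the device. This is a diagnostic sketch only; the indices and device path tried are assumptions:

import cv2

# Check a few ways of opening the device and report whether each one succeeds
for source in [0, 1, -1, "/dev/video0"]:
    cap = cv2.VideoCapture(source)
    print("VideoCapture({!r}).isOpened() ->".format(source), cap.isOpened())
    cap.release()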
I'm trying to run a program that was running before. A while ago I switched the OS and have now come back to Ubuntu 14.10 (before, it was 14.04). I'm not quite sure whether the problem is within OpenCV or something more basic. I can't find the problem. Maybe someone has an idea.
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
It runs to the point where I can see the video capture.
But after typing "q" to quit the program, the window that was opened freezes, turns black after a while, and nothing else happens. Then I have to close the window and force the program to exit.
Any idea what the problem is and how to solve it?
Some buffers are perhaps still being used for drawing, so releasing their memory too early is a bad idea.
Could you try calling destroyAllWindows before calling cap.release?
Okay, I found a workaround here on Stack Overflow. I don't know why I didn't find it earlier.
It seems to be a problem within Linux.
DestroyWindow does not close window on Mac using Python and OpenCV
after
cv2.destroyAllWindows()
add
for i in range(1, 5):
    cv2.waitKey(1)
Don't ask me why, but it works. If someone has an answer to this, please let me know ;o)
Thanks to all who tried to help.
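For completeness, this is roughly how the workaround slots into the end of the original script. It is just a sketch based on the accepted workaround above; the number of extra waitKey calls is arbitrary:

# When everything is done, release the capture and close the window
cap.release()
cv2.destroyAllWindows()

# Workaround: pump the HighGUI event loop a few times so the window
# actually closes instead of freezing
for i in range(1, 5):
    cv2.waitKey(1)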