I have made many attempts to run the cameras connected to my Jetson Nano. When I run this command:
gst-launch-1.0 -v nvarguscamerasrc ! 'video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1' ! autovideosink
The camera works, but when I try to perform a simple camera read in Python with this code:
import sys
import cv2

def read_cam():
    G_STREAM_TO_SCREEN = "videotestsrc num-buffers=50 ! videoconvert ! appsink"
    cap = cv2.VideoCapture(G_STREAM_TO_SCREEN, cv2.CAP_GSTREAMER)
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            cv2.imshow('demo', img)
            cv2.waitKey(1)
    else:
        print("Unable to use pipeline")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    read_cam()
The camera does not work with the code above; it prints "Unable to use pipeline".
What am I doing wrong, and how can I access the camera feed in Python?
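For reference, on the Jetson the working gst-launch pipeline is usually adapted for cv2.VideoCapture by converting the frames out of NVMM memory and terminating in an appsink. A minimal sketch, assuming OpenCV was built with GStreamer support (not verified on this particular setup):

import cv2

# Sketch: the gst-launch pipeline above, adapted for cv2.VideoCapture.
# nvvidconv copies the frames out of NVMM memory and videoconvert
# produces BGR for OpenCV.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),format=NV12,width=1280,height=720,framerate=30/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    print("Unable to open pipeline")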
import cv2

video_capture = cv2.VideoCapture(0)
#video_capture = cv2.VideoCapture('video/ros.mp4')

while True:
    retVal, frame = video_capture.read()
    #frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    #frame = cv2.resize(frame, (0,0), fx=0.5, fy=0.5)
    #cv2.line(frame, (0,0), (511,511), (255,0,0), 5)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
This is my code on Ubuntu 20.04, but it cannot open the camera.
I faced the same issue while running OpenCV code on Ubuntu 20.04 in VMware. The following worked for me:
Open Player -> Manage -> Virtual Machine Settings -> USB Controller, and change the USB controller under Connections to USB 3.1 or USB 1.1; both resolved my issue. Initially it was set to USB 2.0 by default. Hope it helps.
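After changing the controller type, a quick check such as the following confirms whether OpenCV can now see the device (a minimal sketch):

import cv2

# Minimal sanity check: try to open the default camera and report whether it opened.
cap = cv2.VideoCapture(0)
print("Camera opened:", cap.isOpened())
cap.release()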
I have been running a Python script from Node.js using the python-shell package, and I am getting this error:
Error: init done
at PythonShell.parseError (F:\github\pythonShellDemo\node_modules\python-shell\index.js:191:17)
at terminateIfNeeded (F:\github\pythonShellDemo\node_modules\python-shell\index.js:98:28)
at ChildProcess.<anonymous> (F:\github\pythonShellDemo\node_modules\python-shell\index.js:89:9)
at emitTwo (events.js:126:13)
at ChildProcess.emit (events.js:214:7)
at Process.ChildProcess._handle.onexit (internal/child_process.js:198:12)
After some debugging and research I found that this error comes from OpenCV, but I can't find any solution.
Here is the code:
import cv2
import zbar
from PIL import Image
import sys

video = cv2.VideoCapture(0)
count = 0
qrcode = []

while True:
    ret, frame = video.read()
    cv2.imshow('Camera', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    grayscale = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    image = Image.fromarray(grayscale)
    width, height = image.size
    zbarimage = zbar.Image(width, height, 'Y800', image.tobytes())
    scanner = zbar.ImageScanner()
    scanner.scan(zbarimage)
    for x in zbarimage:
        if count == 0:
            qrcode = x.data
            count = count + 1
    if qrcode:
        break

video.release()
cv2.destroyAllWindows()
print(qrcode)
sys.stdout.flush()
I am using Python 2.7.
UPDATE:
Node.js code for calling the Python script:
PythonShell.run('python/scan.py', options, function (err, results) {
    if (err) {
        console.log(err)
        reject(err)
    }
    // results is an array consisting of messages collected during execution
    console.log(results)
    resolve(results)
})
UPDATE:
I tried running OpenCV alone, without zbar, and still got the error:
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The OpenCV library prints init done to the standard error output. It's not an error, but just a debug print. python-shell then turns this into an error. From the python-shell documentation:
If the script writes to stderr or exits with a non-zero code, an error will be thrown.
Suppressing the output seems to be only possible by recompiling the library with a parameter set.
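If recompiling is not an option, one possible script-level workaround is to redirect the C-level stderr to /dev/null before OpenCV prints anything. This is a sketch under that assumption, not something documented by python-shell, and it also hides Python tracebacks and any other stderr output:

import os

# Point file descriptor 2 (stderr) at /dev/null so C-level debug prints
# from OpenCV never reach python-shell. Use with care: Python tracebacks
# and all other stderr output are hidden as well.
devnull = os.open(os.devnull, os.O_WRONLY)
os.dup2(devnull, 2)

import cv2  # use OpenCV as usual after the redirect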
I am using OpenCV to open and read from several webcams. It all works fine, but I cannot seem to find a way to know if a camera is available.
I tried this code (cam 2 does not exist):
import cv2

try:
    c = cv2.VideoCapture(2)
except:
    print "Cam 2 is invalid."
But this just prints a lot of errors:
VIDEOIO ERROR: V4L: index 2 is not correct!
failed to open /usr/lib64/dri/hybrid_drv_video.so
Failed to wrapper hybrid_drv_video.so
failed to open /usr/lib64/dri/hybrid_drv_video.so
Failed to wrapper hybrid_drv_video.so
GStreamer Plugin: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /builddir/build/BUILD/opencv-3.2.0/modules/videoio/src/cap_gstreamer.cpp, line 832
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:
/builddir/build/BUILD/opencv-3.2.0/modules/videoio/src/cap_gstreamer.cpp:832: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
OpenCV Error: Unspecified error (unicap: failed to get info for device
) in CvCapture_Unicap::initDevice, file /builddir/build/BUILD/opencv-3.2.0/modules/videoio/src/cap_unicap.cpp, line 139
VIDEOIO(cvCreateCameraCapture_Unicap(index)): raised OpenCV exception:
/builddir/build/BUILD/opencv-3.2.0/modules/videoio/src/cap_unicap.cpp:139: error: (-2) unicap: failed to get info for device
in function CvCapture_Unicap::initDevice
CvCapture_OpenNI::CvCapture_OpenNI : Failed to enumerate production trees: Can't create any node of the requested type!
<VideoCapture 0x7fa5b5de0450>
No exception is thrown. When using c.read() later, I do get False, but I would like to do this in the initialisation phase of my program.
So, how do I find out how many valid cameras I have or check if a certain number 'maps' to a valid one?
Using cv2.VideoCapture(invalid device number) does not throw an exception. It constructs a <VideoCapture object> containing an invalid device; if you use it, you get exceptions.
Test the constructed object for None and not isOpened() to weed out invalid ones.
For me this works (1 laptop camera device):
import cv2 as cv

def testDevice(source):
    cap = cv.VideoCapture(source)
    if cap is None or not cap.isOpened():
        print('Warning: unable to open video source: ', source)

testDevice(0)  # no printout
testDevice(1)  # prints message
Output with 1:
Warning: unable to open video source: 1
Example from: https://github.com/opencv/opencv_contrib/blob/master/samples/python2/video.py
lines 159ff
cap = cv.VideoCapture(source)
if 'size' in params:
    w, h = map(int, params['size'].split('x'))
    cap.set(cv.CAP_PROP_FRAME_WIDTH, w)
    cap.set(cv.CAP_PROP_FRAME_HEIGHT, h)
if cap is None or not cap.isOpened():
    print 'Warning: unable to open video source: ', source
Another solution, available on Linux, is to pass the /dev/videoX device to the VideoCapture() call. The device nodes only exist while a cam is plugged in. Together with glob(), it is trivial to get all the cameras:
import cv2, glob

for camera in glob.glob("/dev/video?"):
    c = cv2.VideoCapture(camera)
A check on c using isOpened() is of course still needed, but this way you are sure you only scan cameras that are actually present.
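Putting the two together, a minimal sketch might look like this:

import cv2, glob

# Scan only the /dev/video* nodes that exist and keep the ones OpenCV
# can actually open.
available = []
for camera in glob.glob("/dev/video?"):
    c = cv2.VideoCapture(camera)
    if c.isOpened():
        available.append(camera)
    c.release()

print(available)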
Here is a "NOT working" solution to help you prevent tumbling over in this pitfall:
import cv2 as cv
import PySpin

print(cv.__version__)

# provided by Patrick Artner as a solution that works for cameras other
# than those of Point Grey (FLIR).
def testDevice(source):
    cap = cv.VideoCapture(source)
    if cap is None or not cap.isOpened():
        print('Warning: unable to open video source: ', source)

# ... PySpin / Spinnaker (wrapper/SDK library) ...
system = PySpin.System.GetInstance()
cam_list = system.GetCameras()
cam = ''
cam_num = 0

for ID, cam in enumerate(cam_list):
    # Retrieve TL device nodemap
    if ID == cam_num:
        print('Got cam')
        cam.Init()

# ... CV2 again ...
for i in range(10):
    testDevice(i)  # no printout
You can try this code:
from __future__ import print_function
import numpy as np
import cv2

# detect all connected webcams
valid_cams = []
for i in range(8):
    cap = cv2.VideoCapture(i)
    if cap is None or not cap.isOpened():
        print('Warning: unable to open video source: ', i)
    else:
        valid_cams.append(i)

caps = []
for webcam in valid_cams:
    caps.append(cv2.VideoCapture(webcam))

while True:
    # Capture frame-by-frame from each valid camera
    for idx, webcam in enumerate(valid_cams):
        ret, frame = caps[idx].read()
        # Display the resulting frame
        cv2.imshow('webcam' + str(webcam), frame)
    k = cv2.waitKey(1)
    if k == ord('q') or k == 27:
        break

# When everything is done, release the captures
for cap in caps:
    cap.release()
cv2.destroyAllWindows()
I want to capture images from a webcam and then do further image processing for ANPR (Automatic Number Plate Recognition) in Python 2.7, using OpenCV 2.4.10 on Ubuntu 14.04. When I run this simple code, it detects my camera once and then the camera stops working.
Code is:
import cv2

cam = cv2.VideoCapture(0)
s, img = cam.read()
winName = "Movement Indicator"
cv2.namedWindow(winName, cv2.CV_WINDOW_AUTOSIZE)

while s:
    cv2.imshow(winName, img)
    s, img = cam.read()
    key = cv2.waitKey(10) & 0xFF
    if key == 27:
        cv2.destroyWindow(winName)
        break

print "Goodbye"
Can someone please help me with this?
Got the answer: I was not releasing the cam. It works fine now.
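Presumably the fix amounts to releasing the capture once the loop exits, along these lines (a sketch based on the code above; cv2.WINDOW_AUTOSIZE is used here in place of cv2.CV_WINDOW_AUTOSIZE and the print is written so it works on both Python 2 and 3):

import cv2

# The original loop, with cam.release() added after the loop exits.
cam = cv2.VideoCapture(0)
s, img = cam.read()
winName = "Movement Indicator"
cv2.namedWindow(winName, cv2.WINDOW_AUTOSIZE)

while s:
    cv2.imshow(winName, img)
    s, img = cam.read()
    key = cv2.waitKey(10) & 0xFF
    if key == 27:
        cv2.destroyWindow(winName)
        break

cam.release()
print("Goodbye")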
I am using an ffserver and ffmpeg combination to capture web camera video and transmit it over my network.
I want to capture this video using OpenCV and Python from another computer.
I can see the video (cam1.asf) in the browser of another computer, but my OpenCV + Python code cannot capture any frames.
Code for ffserver
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandWidth 2000
<Feed feed1.ffm>
File ./tmp/feed1.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
</Feed>
<Stream cam1.asf>
Feed feed1.ffm
Format asf
VideoCodec msmpeg4v2
VideoFrameRate 30
VideoSize vga
</Stream>
FFmpeg
$ffmpeg -f video4linux2 -i /dev/video0 192.168.1.3 /cam1.ffm
This stream can be seen in the browser
But with the OpenCV code below:
import sys
import cv2.cv as cv
import numpy

video = "http://http://192.168.1.3:8090/cam1.asf"
capture = cv.CaptureFromFile(video)
cv.NamedWindow('Video Stream', 1)

while True:
    # capture the current frame
    frame = cv.QueryFrame(capture)
    #if frame is None:
    #    break
    #else:
    #    detect(frame)
    cv.ShowImage('Video Stream', frame)
    if cv.WaitKey(10) == 27:
        print 'ESC pressed. Exiting ...'
        break
I do not get any output from the stream.
My aim is to work with the web camera video both at the base station (i.e. where the web camera is connected) and at the networked location.
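For comparison, with the cv2.VideoCapture API a network stream is normally opened as sketched below (assuming OpenCV was built with FFmpeg support; note that the URL is written with a single http:// prefix here):

import cv2

# Sketch: open the ffserver stream directly with cv2.VideoCapture and
# display frames until the stream ends or ESC is pressed.
capture = cv2.VideoCapture("http://192.168.1.3:8090/cam1.asf")

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    cv2.imshow('Video Stream', frame)
    if cv2.waitKey(10) & 0xFF == 27:
        print('ESC pressed. Exiting ...')
        break

capture.release()
cv2.destroyAllWindows()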