I am trying to use an Arduino Uno to take a snapshot photo with a webcam. I am using Python with OpenCV to interface with the camera and capture the video, and pySerial for the Arduino interface, so that when a pushbutton is pressed, the Arduino and Python interact and the camera takes the photo. The problem I am experiencing is that while the serial connection to the Arduino is running, the webcam window does not load any video, i.e. it shows "(Not Responding)". When I comment out the lines of code pertaining to the serial interface, the camera window loads and I am able to view the video. Since both devices are connected to my computer via USB, I suspect pySerial is blocking and not letting the camera video feed load its data for me to view. My question: is there a way to interface the two together so that when the Arduino receives a digital input, it sends a "command" to Python to make the webcam take a photo? Any suggestion will be greatly appreciated.
Platform:
Windows 10
Python 3.8
Python Code:
import cv2
import serial

cam = cv2.VideoCapture(0)
ser = serial.Serial('COM7', 9600)

cv2.namedWindow("Object")
img_counter = 0

while True:
    ret, frame = cam.read()
    snap = ser.read()
    ser.reset_input_buffer()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("Object", frame)

    k = cv2.waitKey(1)
    if k % 256 == 27:
        # ESC pressed
        print("Program closing...")
        break
    elif str(snap) == '1':
        # Button pressed
        img_name = "opencv_frame_{}.png".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("{} written!".format(img_name))
        img_counter += 1

cam.release()
cv2.destroyAllWindows()
P.S. Please forgive me if this issue seems basic; I am a bit of a novice in this area. I am creating this school project to interface a neural network and a PLC. I also have a Raspberry Pi that I could possibly use to run the neural network, but that is another learning curve for me, lol.
Set a timeout on the serial connection:

ser = serial.Serial(port='COMx', baudrate=9600, timeout=0.1)  # timeout is a float, in seconds

Without a timeout, ser.read() blocks until a byte arrives, which is exactly what freezes your window. Also, don't use read(); use readline() instead. You can then convert the data to an integer (if it is one):

# put this inside the while loop
try:
    dat = ser.readline()
    try:
        datInt = int(dat)
    except ValueError:
        print('Convert failed')  # readline() often returns b'', which cannot be converted
except serial.SerialException:
    print('Serial connection failed')

datInt now holds the integer that was written. If the value is not an integer, you can treat it as a string and convert it with further processing. If you need to read a list or more data, write a message here. Good luck!
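Putting the two fixes together with the question's original loop, a minimal sketch might look like the following. The COM port and baud rate are taken from the question; the b'1' trigger assumes your Arduino sketch writes a line such as "1\n" when the button is pressed:

import cv2
import serial

cam = cv2.VideoCapture(0)
# timeout=0.1 makes readline() return after at most 0.1 s instead of blocking forever
ser = serial.Serial('COM7', 9600, timeout=0.1)

cv2.namedWindow("Object")
img_counter = 0

while True:
    ret, frame = cam.read()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("Object", frame)

    snap = ser.readline().strip()  # b'' if nothing arrived within the timeout
    if snap == b'1':               # compare bytes with bytes, not str
        img_name = "opencv_frame_{}.png".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("{} written!".format(img_name))
        img_counter += 1

    if cv2.waitKey(1) % 256 == 27:  # ESC pressed
        print("Program closing...")
        break

cam.release()
cv2.destroyAllWindows()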
Could you please tell me, is there a way to get video streaming into NiFi? I am using NiFi's "ExecuteProcess" processor to run Python and OpenCV and capture the video stream, but I cannot get multiple flow files: the processor runs the Python code and, when it finishes, puts everything written to the output console (all the captured frames) into a single flow file. Since I want to run YOLOv3 analysis on live video, I need each frame in its own flow file. The code is given below.
If there is a better way to get live video streaming using NiFi, please share it.
import cv2

cap = cv2.VideoCapture("videolink.mp4")
while True:
    ret, image = cap.read()
    if not ret:  # stop when the video ends
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    print(jp)
cap.release()
cv2.destroyAllWindows()
I think the simplest way to make this streaming work is:
1/ Set up a ListenHTTP processor that receives video parts through incoming POST requests; this becomes your data ingestion point.
2/ In your Python script, replace the print call with an HTTP POST, like this:
import cv2
import requests

cap = cv2.VideoCapture("videolink.mp4")
while True:
    ret, image = cap.read()
    if not ret:  # stop when the video ends
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    # PORT and BASE_PATH must be configured in the ListenHTTP processor
    requests.post('http://localhost:PORT/BASE_PATH', data=jp)
cap.release()
cv2.destroyAllWindows()
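Each POST request then arrives in NiFi as its own flow file, which gives you the one-frame-per-flow-file behavior you are after.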
I am opening my v4l2 device, and both the left and right streams come joined together and open at the same time. Is there a way to split the left and right sensor image frames and display them simultaneously using GStreamer?
EDIT1:
OK, so I have a v4l2 device, a stereo camera with both the left and right streams writing to /dev/video0, and using GStreamer I was able to view both frames together. I would like to know how to split the left frame and right frame and display them in separate windows. I am also trying out this script in OpenCV, where I am only getting the right video stream; I want to be able to view both streams in separate windows, either in OpenCV or using GStreamer.
The OpenCV version is below:
import os
import v4l2capture
import select
import numpy as np
import cv2

video = cv2.VideoCapture("/dev/video0")
while True:
    ret, frame = video.read()
    cv2.imshow('whatever', frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
video.release()
cv2.destroyAllWindows()
The normal GStreamer pipeline just uses a source and a sink:
gst-launch-1.0 v4l2src ! xvimagesink
I was able to get the answer to my question here,
https://www.stereolabs.com/docs/opencv/python/
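For reference, the approach in those docs boils down to slicing the side-by-side frame in half with NumPy. A minimal sketch, assuming the combined frame is left|right halves of equal width (as with ZED-style stereo cameras):

import cv2

cap = cv2.VideoCapture("/dev/video0")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # the combined frame is assumed to be the left and right images side by side
    half = frame.shape[1] // 2
    left, right = frame[:, :half], frame[:, half:]
    cv2.imshow('left', left)
    cv2.imshow('right', right)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()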
I have a basic Python program running on my Raspberry Pi. It works with no issues. But when I log in to the Pi from another computer over SSH and run the same Python program, it gives me errors as if the video isn't working. What am I doing wrong? Do I need to force the RPi to use its own monitor to get this to work, or what? Any suggestions?
I get this error:
Unable to init server: Could not connect: Connection refused
(Video:1363): Gtk-WARNING **: cannot open display:
This is my program:
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warm up
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
Run export DISPLAY=:0 and then run your code.
If you run it without the line above, it runs against the current (display-less) SSH session. In simple terms, when you export the display, you are connecting to the currently running Raspberry Pi desktop session, and it will act as if you were running the program locally.
NOTE: In some cases you need export DISPLAY=:0.0. Just google it for your OS.
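For example, in the SSH session (the script name here is a placeholder):

export DISPLAY=:0
python your_camera_script.py  # the window now opens on the Pi's attached monitor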
Context:
I have been playing around with python's wrapper for opencv2.
I wanted to play with a few ideas and use a wide angle camera similar to 'rear view' cameras in cars.
I got one from a scrapped crash car (it's got 4 wires). I took an educated guess from the wires' color coding and connected it up so that I power the power and ground lines from a USB type A connector and feed the NTSC composite+/composite- into an RCA connector.
I bought an NTSC-to-USB converter like this one.
It came with drivers and some off-the-shelf VHStoDVD software.
The problem:
I used run-of-the-mill examples found online to test it, like this:
import numpy as np
import cv2

cam_index = 0
cap = cv2.VideoCapture(cam_index)
print cap.isOpened()

ret, frame = cap.read()
#print frame.shape[0]
#print frame.shape[1]

while (cap.isOpened()):
    ret, frame = cap.read()
    #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

#release and close
cap.release()
cv2.destroyAllWindows()
this is the output from shell:
True
Traceback (most recent call last):
File "C:/../cam_capture_.py", line 19, in <module>
cv2.imshow('frame', frame)
error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow
>>>
Key observations:
(screenshots attached)
1. In Control Panel the USB dongle is shown as 'OEM capture' under 'Sound, video & game controllers', so it's not seen as a simple plug-and-play webcam under 'Imaging devices'.
2. If I open the VHStoDVD software I need to configure 2 aspects:
   set as Composite
   set encoding as NTSC
   Then the feed from the analog camera is shown OK within the VHStoDVD application.
3. When I open the device's video channel in FLV (device capture), the device stream is just a black screen, but IF I open the VHStoDVD software WHILE FLV is streaming, I get the camera's feed streaming in FLV and a black screen shown on the VHStoDVD feed. Another important difference is that there is a huge latency of approx. 0.5 s when the feed is in FLV, as opposed to running in VHStoDVD.
4. When running "cam_capture.py" as per the sample code above, at some point during runtime I will eventually get a stop error, code 0x0000008E:
   detail:
   stop: 0x0000008E (0xC0000005, 0xB8B5F417, 0X9DC979F4, 0X00000000)
   ks.sys - Address B8B5F417 base at B8B5900, Datestamp...
   beg mem dump
   phy mem dump complete
5. If I try to print frame.shape[0] or frame.shape[1] I get a TypeError saying I cannot print type None.
6. If I try any other cam_index the result is always False.
TLDR:
In 'Control Panel' the camera device is under 'Sound, video & game controllers', not under 'Imaging devices';
The cam_index is zero;
capture.isOpened() returns True;
The frame size is None;
If VHStoDVD is running with Composite + NTSC configured, the camera works. Obviously you can't see the image in the attached print screen, but trust me! ;)
Is there any form of initialisation at the start of communication with the dongle that could fix this, i.e. emulate the VHStoDVD settings (Composite + NTSC)? I thought I could Bus Pirate the start of comms between VHStoDVD and the dongle, but it feels like I'd be going above and beyond to do something I thought was a turnkey solution.
Any constructive insights, suggestions, or corrections are most welcome!
Thanks
Cheers
OK, so after deeper investigation the initial suspicion was confirmed: because the NTSC dongle is not handled as an imaging device (it's seen as a video controller, similar to an emulation of a TV tuner card), we are able to call cv2.VideoCapture with cam_index=0, but the video channel itself is not transmitting, because we are required to define a bunch of parameters:
encoding
frame size
fps rate, etc.
The problem is that, because the device is not supported as an imaging device, calling cv2.VideoCapture.set(parameter, value) doesn't seem to change anything on the original video feed.
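(As a quick sanity check, one way to tell whether a set() call took effect is to read the property back with get(); this is a generic OpenCV sketch, not specific to this dongle:)

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)    # request a frame width
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH))  # an unchanged value means the driver ignored the request
cap.release()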
I didn't find a solution, but I found a workaround. There seem to be quite a few options online; search for keywords like "DV to webcam" or "camcorder as a webcam".
I used DVdriver (http://www.trackerpod.com/TCamWeb/download.htm) (I used the trial because I am cheap!).
Why does it work?
As far as I can tell, DVdriver receives the data from the device, which is set as a video controller (similar to capturing from "Windows Movie Maker" or ffmpeg), and then through "fairydust" outputs the frames on cam_index=0 (assuming no other cam is connected) as an 'imaging device' webcam.
Summary
TLDR: use DVdriver or similar.
I found a workaround, but I would really like to understand it from first principles and possibly generate a similar initialisation of the NTSC dongle from within Python, without any other software dependencies. Until then, hopefully this will help others who were also struggling or assuming it was a hardware issue.
I will now leave you with some Beckett:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. (!)
It's a few months late, but this might be useful. I was working on a Windows computer and had installed the drivers that came with the device. I tried the same code as in your question with an Ezcap from Somagic and got the same error. Since "frame is None," I decided to try an if statement around it, in case it was an initialization error. Placing into the loop:
if frame is None:
    print 0
else:
    print 1
The result is: 01110111111111111111111111111...
And if the frame = cap.read() above the loop is commented out, I get: 00111111111111111...
So for my capture device it appears that all frames beyond the 5th are captured. I'm not sure why this is, but it might be a useful workaround for now.
Disclaimer: Unfortunately, my camera input is currently in a radiation field, so I can't get to it for a couple of weeks to verify. However, the images are currently a black frame (which is expected without proper input).
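(Building on that observation, here is a hedged sketch that simply discards the initial empty frames before entering the main loop; the 100-read cap is an arbitrary safety limit, not something from the original answer:)

import cv2

cap = cv2.VideoCapture(0)

# discard the initial None frames some capture devices return while warming up
for _ in range(100):  # arbitrary safety cap
    ret, frame = cap.read()
    if ret and frame is not None:
        break

while ret and frame is not None:
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ret, frame = cap.read()

cap.release()
cv2.destroyAllWindows()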
I faced the same issue. As a workaround, I first tried the solution proposed by @user3380927 and it did indeed work. But since I didn't want to rely on external software, I started tweaking parameters using OpenCV in Python.
These lines of code worked like a charm (you have to insert them before reading the frame for the first time):

cam.set(cv2.CAP_FFMPEG, True)
cam.set(cv2.CAP_PROP_FPS, 30)
So, the full code for basic camera reading is as follows:
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_FFMPEG, True)
cam.set(cv2.CAP_PROP_FPS, 30)

while True:
    ret, frame = cam.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
You can then apply image processing operations as usual. Just for reference, this was my configuration:
OpenCV 3.1.0
Python 2.7.5
Windows 8.1
Elgato Video Capture device (this was also shown under 'Sound, video & game controllers')
I'm having problems getting a video stream from an IP camera I have. I'm using OpenCV to get the images from it. Here's the code I have:
import sys
import cv

video = "http://prot-on.dyndns.org:8080/video2.mjpeg"
capture = cv.CaptureFromFile(video)
cv.NamedWindow('Video Stream', 1)

while True:
    # capture the current frame
    frame = cv.QueryFrame(capture)
    if frame is None:
        break
    else:
        #detect(frame)
        cv.ShowImage('Video Stream', frame)
    k = cv.WaitKey(10)  # needed both to refresh the window and to read the key
    if k == 0x1b:  # ESC
        print 'ESC pressed. Exiting ...'
        break
Actually, this thing works, but it takes too much time to display the images. I'm guessing it's because of this error from ffmpeg.
[mjpeg # 0x8cd0940]max_analyze_duration reached
[mjpeg # 0x8cd0940]Estimating duration from bitrate, this may be inaccurate
I'm not a python expert so any help would be appreciated!
First, MJPEG is a relatively simple video format. If you read your IP camera's manual, it's very likely that you'll find how to display the video in a browser with a bit of JavaScript code. In fact, if you open http://prot-on.dyndns.org:8080/video2.mjpeg in Google Chrome, you will see the video without any problem. (Maybe you shouldn't leave the real URL of your camera in the question.)
Second, as far as I can see, the frame rate of your camera is pretty slow. That might be due to Internet latency or to your camera's settings. Compare what you see in Chrome with the video displayed by your code; if they are of the same quality, then it's not your code's problem.
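(For reference, a minimal sketch with the modern cv2 API, which can open MJPEG-over-HTTP URLs directly; the URL is the one from the question, and the frame rate will still be bounded by the camera and the network:)

import cv2

cap = cv2.VideoCapture("http://prot-on.dyndns.org:8080/video2.mjpeg")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Video Stream', frame)
    if cv2.waitKey(1) & 0xFF == 27:  # ESC
        print('ESC pressed. Exiting ...')
        break
cap.release()
cv2.destroyAllWindows()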