OpenCV works with Raspberry Pi locally but not over the network - Python

I have a basic Python program running on my Raspberry Pi, and it works with no issues. But when I log in to the Pi from another computer over SSH and run the same Python program, it gives me errors saying the video isn't working. What am I doing wrong? Do I need to force the RPi to use its own monitor to get this to work, or what? Any suggestions?
I get this error:
Unable to init server: Could not connect: Connection refused
(Video:1363): Gtk-WARNING **: cannot open display:
This is my program:
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

Run export DISPLAY=:0 and then run your code.
If you run it without that export, the program tries to open its window from your current SSH shell, which has no display attached. In simple words, by exporting DISPLAY you attach to the display of the running Raspberry Pi session, so it acts as if you were running the program locally and the window appears on the Pi's own monitor.
NOTE: In some cases you need export DISPLAY=:0.0 instead; check what applies to your OS.
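If you want the script itself to cope with being launched over SSH, a minimal sketch (my own illustration, not part of the original answer) is to check the DISPLAY environment variable and fall back to saving frames to disk when no X display is reachable; the helper name and the output filename pattern below are hypothetical:

    import os
    import cv2

    have_display = bool(os.environ.get("DISPLAY"))  # unset/empty when run from a plain SSH shell

    def show_or_save(image, frame_count):
        # Show the frame when an X display is available, otherwise write it to disk.
        if have_display:
            cv2.imshow("Frame", image)
            return cv2.waitKey(1) & 0xFF
        cv2.imwrite("frame_{:05d}.png".format(frame_count), image)  # hypothetical fallback
        return 255  # pretend no key was pressed

Alternatively, connecting with ssh -X (or ssh -Y) forwards the X display to the machine you are sitting at, which avoids the error but is usually too slow for live video.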

Related

python opencv blocked image icon showing when reading image

I have a small OpenCV script that works with my Windows laptop's built-in camera but doesn't work when I plug in an external Logitech C922 Pro HD stream webcam.
I have changed the number in cap = cv2.VideoCapture(xxxNUMBERxxx) to try to open the camera, with no luck. I have also installed the Logitech drivers, with no luck. The rest of the internet seems to think it's a permissions issue, but I can't find any specific permission settings, so I'm not sure that's the problem. I just want to be able to get a video stream from the camera.
The script is below:
import numpy as np
import cv2

cap = cv2.VideoCapture(2, cv2.CAP_DSHOW)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Turns out to be a strange and silly solution. After downloading the 'Logi Capture' app, I had to keep that app running, with the video streaming through it, while opening the OpenCV app. It feels like a workaround rather than a solution, but it works for me.
Another suggestion is the opposite: completely uninstall the Logi Capture application.
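If it is unclear which device index corresponds to the external camera, a small probe loop (an illustrative sketch, not from the original answers; the range of four indices is an arbitrary assumption) can show which index and Windows backend actually deliver frames:

    import cv2

    # Probe a few device indices with the DirectShow and Media Foundation backends
    # (both are Windows backends exposed by OpenCV).
    for backend, name in [(cv2.CAP_DSHOW, "DSHOW"), (cv2.CAP_MSMF, "MSMF")]:
        for index in range(4):  # arbitrary upper bound on indices to try
            cap = cv2.VideoCapture(index, backend)
            ok = cap.isOpened() and cap.read()[0]
            print("index {} via {}: {}".format(index, name, "frame captured" if ok else "no frame"))
            cap.release()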

How to run OpenCV and Arduino PySerial simultaneously

I am trying to use an Arduino Uno to take a snapshot photo with a webcam. I am using Python OpenCV to interface with the camera and capture the video, and PySerial for the Arduino interface, so that when a pushbutton is pressed, the Arduino and Python interact and the camera takes the photo. The problem I am experiencing is that while my serial connection to the Arduino is running, the webcam window does not load the video, i.e. it shows (Not Responding). When I comment out the lines of code pertaining to the serial interface, the camera window loads and I am able to view the video. I'm thinking that since both devices are connected to my computer via USB, PySerial is taking over the serial interface and not allowing the camera video feed to load its data for me to view. My question: is there a way to interface the two so that when the Arduino receives a digital input, it sends a "command" to Python to make the webcam take a photo? Any suggestion will be greatly appreciated.
Platform:
Windows 10
Python 3.8
Python Code:
import cv2
import serial

cam = cv2.VideoCapture(0)
ser = serial.Serial('COM7', 9600)

cv2.namedWindow("Object")
img_counter = 0

while True:
    ret, frame = cam.read()
    snap = ser.read()
    ser.reset_input_buffer()
    if not ret:
        print("failed to grab frame")
        break
    cv2.imshow("Object", frame)

    k = cv2.waitKey(1)
    if k % 256 == 27:
        # ESC pressed
        print("Program closing...")
        break
    elif str(snap) == '1':
        # Button pressed
        img_name = "opencv_frame_{}.png".format(img_counter)
        cv2.imwrite(img_name, frame)
        print("{} written!".format(img_name))
        img_counter += 1

cam.release()
cv2.destroyAllWindows()
P.S. Please forgive me if this issue seems basic; I am a bit of a novice in this area. I am creating this school project to interface a neural network and a PLC. I also have a Raspberry Pi that I could possibly use to run the neural network, but that is another area with a learning curve for me.
Set a timeout on the serial connection:

    ser = serial.Serial(port='comX', baudrate=9600, timeout=.1)  # timeout is a float, in seconds

I would not use read(); use readline() instead. You can then convert the data (if it is an integer):

    # put this inside the while loop
    try:
        dat = ser.readline()
        try:
            datInt = int(dat)
        except:
            print('Convert fail')  # readline() often returns b'', which cannot be converted
    except:
        print('Serial connection failed')

datInt then holds the integer that was written by the Arduino. If it is not an integer, you can treat it as a string and convert it with a few more commands. If you need to read a list or more data, leave a message here.
Good luck!
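Putting that suggestion together with the original loop, a minimal sketch might look like the following; the COM port, baud rate, and the assumption that the Arduino sends the line "1" on a button press are carried over from the question, not verified here:

    import cv2
    import serial

    cam = cv2.VideoCapture(0)
    # timeout=0.1 makes readline() return within 0.1 s instead of blocking the video loop
    ser = serial.Serial(port='COM7', baudrate=9600, timeout=0.1)

    cv2.namedWindow("Object")
    img_counter = 0

    while True:
        ret, frame = cam.read()
        if not ret:
            print("failed to grab frame")
            break
        cv2.imshow("Object", frame)

        line = ser.readline()       # returns b'' if nothing arrived within the timeout
        if line.strip() == b'1':    # assumes the Arduino prints "1" plus a newline on a button press
            img_name = "opencv_frame_{}.png".format(img_counter)
            cv2.imwrite(img_name, frame)
            print("{} written!".format(img_name))
            img_counter += 1

        if cv2.waitKey(1) % 256 == 27:  # ESC closes the program
            print("Program closing...")
            break

    cam.release()
    ser.close()
    cv2.destroyAllWindows()

Because the read no longer blocks, the window keeps refreshing even when the Arduino has nothing to send.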

How to get video streaming data in Apache nifi?

Could you please tell me whether there is a way to get video streaming into NiFi? I am using the NiFi ExecuteProcess processor to run Python and OpenCV to capture the video stream, but I am not able to produce multiple flow files: the processor runs the Python code and, when it finishes, puts everything written to the output console into a single flow file. Since I want to run YOLOv3 analysis on live video, I want each frame in a separate flow file. The code is given below.
If there is a better way to get live video streaming into NiFi, please share it.
import cv2

cap = cv2.VideoCapture("videolink.mp4")

while True:
    ret, image = cap.read()
    if not ret:
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    print(jp)

cap.release()
cv2.destroyAllWindows()
I think the simplest way to set up the streaming is:
1/ Set up a ListenHTTP processor that will receive the video frames through incoming POST requests; this becomes your data ingestion point.
2/ In your Python script, replace the print with an HTTP POST call, like this:
import cv2
import requests

cap = cv2.VideoCapture("videolink.mp4")

while True:
    ret, image = cap.read()
    if not ret:
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    # PORT and BASE_PATH must match what is configured in the ListenHTTP processor
    requests.post('http://localhost:PORT/BASE_PATH', data=jp)

cap.release()
cv2.destroyAllWindows()

OpenCV acquisition error with USB camera, but the VideoCapture instance is still opened

I have the following script:
import cv2

cap = cv2.VideoCapture(INDEX)

while cap.isOpened():
    retval, frame = cap.read()
    # do stuff
The script runs on an ARM64 board with Ubuntu 18 and a USB camera. Everything works fine, but sometimes the script crashes with the following error:
/build/opencv-XDqSFW/opencv-3.2.0+dfsg/modules/imgcodecs/src/loadsave.cpp:637: error: (-215) !buf.empty() && buf.isContinuous() in function imdecode_
It seems that no data is being sent by the camera anymore (but the isOpened condition still holds). Is there a way to fix the issue without disconnecting and reconnecting the camera?
EDIT: the error is caused by the retval, frame = cap.read() line.
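The thread does not record a confirmed fix, but one defensive pattern (my own sketch, not advice from the camera vendor) is to check retval on every read and recreate the VideoCapture when frames stop arriving:

    import time
    import cv2

    INDEX = 0  # placeholder camera index, as in the question

    cap = cv2.VideoCapture(INDEX)
    while True:
        retval, frame = cap.read()
        if not retval:
            # No frame delivered: drop the handle and try to reopen the device.
            cap.release()
            time.sleep(1.0)
            cap = cv2.VideoCapture(INDEX)
            continue
        # do stuff

Whether reopening actually recovers the device without physically reconnecting it depends on the camera and driver.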

v4l2 device left and right video streams of stereo camera using gstreamer

I am opening my v4l2 device, and both the left and right streams are joined and opened at the same time. Is there a way to split the left and right sensor image frames and display them simultaneously using gstreamer?
EDIT1:
OK, so I have a v4l2 device, a stereo camera, that writes both the left and right streams to /dev/video0. Using gstreamer I was able to view both frames together; I would like to know how to split the left frame and the right frame and display them in separate windows. I am also trying this out in OpenCV, where I only get the right video stream; I want to be able to view both streams in separate windows, either in OpenCV or using gstreamer.
The OpenCV version is below:
import os
import v4l2capture
import select
import numpy as np
import cv2

video = cv2.VideoCapture("/dev/video0")

while(True):
    ret, frame = video.read()
    cv2.imshow('whatever', frame)
    key = cv2.waitKey(1) & 0xFF
    if(key == ord("q")):
        break

video.release()
cv2.destroyAllWindows()
The normal gstreamer pipeline just uses a source and a sink:
gst-launch-1.0 v4l2src ! xvimagesink
I was able to find the answer to my question here:
https://www.stereolabs.com/docs/opencv/python/
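For reference, the linked approach comes down to slicing the combined frame in half; a minimal sketch, assuming the device delivers the left and right images side by side in one wide frame:

    import cv2

    video = cv2.VideoCapture("/dev/video0")

    while True:
        ret, frame = video.read()
        if not ret:
            break
        # The stereo camera is assumed to deliver both sensors in one wide frame,
        # so splitting it down the middle gives the left and right images.
        width = frame.shape[1] // 2
        left, right = frame[:, :width], frame[:, width:]
        cv2.imshow("left", left)
        cv2.imshow("right", right)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    video.release()
    cv2.destroyAllWindows()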
