v4l2 device left and right video streams of stereo camera using gstreamer - python

I am opening my v4l2 device, and the left and right streams come in joined together in a single feed. Is there a way to split the left and right sensor image frames and display them simultaneously using gstreamer?
EDIT1:
OK, so: I have a v4l2 stereo camera that writes both the left and right streams to /dev/video0. Using gstreamer I was able to view both frames together, and I would like to know how to split the left and right frames and display them in separate windows. I am also trying the script below in OpenCV, where I only get the right video stream; I want to be able to view both streams in separate windows, either in OpenCV or using gstreamer.
The OpenCV version is below:
import os
import v4l2capture
import select
import numpy as np
import cv2

video = cv2.VideoCapture("/dev/video0")
while True:
    ret, frame = video.read()
    cv2.imshow('whatever', frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
video.release()
cv2.destroyAllWindows()
The plain gstreamer pipeline just uses a source and a sink:
gst-launch-1.0 v4l2src ! xvimagesink

I was able to find the answer to my question here:
https://www.stereolabs.com/docs/opencv/python/
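For reference, here is a minimal sketch of that approach, assuming the camera delivers the two sensors side by side in a single frame; the device index and the 2560x720 side-by-side resolution are assumptions to adjust for your camera:

import cv2

cap = cv2.VideoCapture(0)  # /dev/video0
# Assumed side-by-side mode: two 1280x720 images joined into one 2560x720 frame.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2560)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Left sensor in the left half of the frame, right sensor in the right half.
    half = frame.shape[1] // 2
    left, right = frame[:, :half], frame[:, half:]
    cv2.imshow("left", left)
    cv2.imshow("right", right)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()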

Related

python opencv blocked image icon showing when reading image

I have a small OpenCV script that works with my Windows laptop camera but doesn't work when I plug in an external Logitech C922 Pro HD Stream webcam.
I have changed the index in cap = cv2.VideoCapture(xxxNUMBERxxx) to try to open the camera, with no luck. I have also installed the Logitech drivers, with no luck. The rest of the internet seems to think it's a permissions issue, but I can't find any specific permission settings, so I'm not sure that's it. I just want to be able to get a video stream from the camera.
The script is below:
import numpy as np
import cv2

cap = cv2.VideoCapture(2, cv2.CAP_DSHOW)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
It turns out to be a strange and silly solution. After downloading the 'Logi Capture' app, I had to keep it running, with video streaming through it, while opening the OpenCV app. It seems like a workaround rather than a solution, but it works for me.
The solution is to completely uninstall the Logi Capture application.
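If neither workaround applies, a quick way to find out which index and backend actually open the external camera is to probe the combinations. A minimal sketch (the range of indices is an assumption; adjust as needed):

import cv2

# Try a few capture indices against the DirectShow and Media Foundation backends
# and report which combination opens successfully.
for backend, name in [(cv2.CAP_DSHOW, "DSHOW"), (cv2.CAP_MSMF, "MSMF")]:
    for index in range(4):
        cap = cv2.VideoCapture(index, backend)
        print(f"index={index} backend={name} opened={cap.isOpened()}")
        cap.release()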

How to get video streaming data in Apache nifi?

Could you please tell me whether there is a way to get video streaming into NiFi? I am using NiFi's "ExecuteProcess" to run Python and OpenCV to capture the video stream, but I am not able to produce multiple flow files. The processor runs the Python code and then puts all of the captured frames into a single flow file. Since I want to run YOLOv3 analysis on live video, I need each frame in a separate flow file. The code is given below. My program runs the Python code using "ExecuteProcess", and when it finishes, everything written to the output console is stored in a single flow file, but I want multiple flow files.
If there is a better way to get live video streaming into NiFi, please share it.
import cv2

cap = cv2.VideoCapture("videolink.mp4")
while True:
    ret, image = cap.read()
    if not ret:
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    print(jp)
cap.release()
cv2.destroyAllWindows()
I think the simplest way to set up the streaming is:
1/ Set up a ListenHTTP processor that receives the video frames through incoming POST requests; this becomes your data ingestion point.
2/ In your Python script, replace the print with an HTTP POST call, like this:
import cv2
import requests

cap = cv2.VideoCapture("videolink.mp4")
while True:
    ret, image = cap.read()
    if not ret:
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    # PORT and BASE_PATH must match the ListenHTTP processor configuration
    requests.post('http://localhost:PORT/BASE_PATH', data=jp)
cap.release()
cv2.destroyAllWindows()

Opencv works with raspberry pi locally but not thru network

I have a basic Python program running on my Raspberry Pi. It works with no issues locally. But when I log in to the Pi from another computer over SSH and run the same Python program, it gives me errors as if the video isn't working. What am I doing wrong? Do I need to force the RPi to use its own monitor to get this to work? Any suggestions?
I get this error:
Unable to init server: Could not connect: Connection refused
(Video:1363): Gtk-WARNING **: cannot open display:
This is my program:
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warmup
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # grab the raw NumPy array representing the image, then initialize the timestamp
    # and occupied/unoccupied text
    image = frame.array
    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
Run export DISPLAY=:0 and then run your code.
If you run the script without this, it runs against the display of your current (SSH) shell, which has none. In simple terms, when you export the display you connect to the X session already running on the Raspberry Pi, so the program behaves as if you were running it locally.
NOTE: In some cases you need export DISPLAY=:0.0. Just google it for your OS.
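An equivalent sketch, setting the display from inside the script before any window is created (":0" is an assumption; some setups need ":0.0"):

import os

# Point X clients at the Pi's local display before cv2.imshow creates any window.
os.environ.setdefault("DISPLAY", ":0")

The rest of the capture loop stays the same.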

How to pipe live video frames from ffmpeg to PIL?

I need to use ffmpeg/avconv to pipe jpg frames to a Python PIL (Pillow) Image object, using gst as an intermediary*. I've been searching everywhere for this answer without much luck. I think I'm close, but I'm stuck. I'm using Python 2.7.
My ideal pipeline, launched from python, looks like this:
ffmpeg/avconv (as h264 video)
Piped ->
gst-streamer (frames split into jpg)
Piped ->
PIL Image object
I have the first few steps under control as a single command that writes .jpgs to disk as furiously fast as the hardware will allow.
That command looks something like this:
command = [
    "ffmpeg",
    "-f video4linux2",
    "-r 30",
    "-video_size 1280x720",
    "-pixel_format 'uyvy422'",
    "-i /dev/video0",
    "-vf fps=30",
    "-f H264",
    "-vcodec libx264",
    "-preset ultrafast",
    "pipe:1 -",
    "|",  # Pipe to GST
    "gst-launch-1.0 fdsrc !",
    "video/x-h264,framerate=30/1,stream-format=byte-stream !",
    "decodebin ! videorate ! video/x-raw,framerate=30/1 !",
    "videoconvert !",
    "jpegenc quality=55 !",
    "multifilesink location=" + Utils.live_sync_path + "live_%04d.jpg"
]
This will successfully write frames to disk if run with Popen or os.system.
But instead of writing frames to disk, I want to capture the output in my subprocess pipe and read the frames, as they are written, in a file-like buffer that can then be read by PIL.
Something like this:
import subprocess as sp
import shlex
import StringIO
from PIL import Image

clean_cmd = shlex.split(" ".join(command))
pipe = sp.Popen(clean_cmd, stdout=sp.PIPE, bufsize=10**8)

while pipe:
    raw = pipe.stdout.read()
    buff = StringIO.StringIO()
    buff.write(raw)
    buff.seek(0)
    # Open or do something clever...
    im = Image.open(buff)
    im.show()
    pipe.flush()
This code doesn't work - I'm not even sure I can use "while pipe" in this way. I'm fairly new to using buffers and pipes like this.
I'm not sure how I would know that an image has been written to the pipe, or when to read the 'next' image.
Any help in understanding how to read the images from a pipe rather than from disk would be greatly appreciated.
This is ultimately a Raspberry Pi 3 pipeline, and in order to increase my frame rates I can't (A) read/write to/from disk or (B) use a frame-by-frame capture method, as opposed to streaming H.264 video directly from the camera chip.
I assume the ultimate goal is to handle a USB camera at a high frame rate on Linux, and the following addresses this question.
First, while a few USB cameras support H.264, the Linux driver for USB cameras (the UVC driver) currently does not support stream-based payloads, which include H.264; see the "UVC Feature" table on the driver home page. User-space tools like ffmpeg use the driver, so they have the same limitations regarding which video format can be used for the USB transfer.
The good news is that if a camera supports H.264, it almost certainly supports MJPEG, which is supported by the UVC driver and compresses well enough to support 1280x720 at 30 fps over USB 2.0. You can list the video formats supported by your camera using v4l2-ctl -d 0 --list-formats-ext. For a Microsoft Lifecam Cinema, e.g., 1280x720 is supported at only 10 fps for YUV 4:2:2 but at 30 fps for MJPEG.
For reading from the camera, I have good experience with OpenCV. In one of my projects, I have 24(!) Lifecams connected to a single Ubuntu 6-core i7 machine, which does real-time tracking of fruit flies using 320x240 at 7.5 fps per camera (and also saves an MJPEG AVI for each camera to have a record of the experiment). Since OpenCV directly uses the V4L2 APIs, it should be faster than a solution using ffmpeg, gst-streamer, and two pipes.
Bare bones (no error checking) code to read from the camera using OpenCV and create PIL images looks like this:
import cv2
from PIL import Image

cap = cv2.VideoCapture(0)  # /dev/video0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    ...  # do something with PIL image
Final note: you likely need to build the v4l version of OpenCV to get compression (MJPEG), see this answer.
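If your OpenCV build does support it, a minimal sketch for requesting MJPEG and the target resolution/frame rate from the driver looks like this; whether the requests are honored depends on the camera and the build:

import cv2

cap = cv2.VideoCapture(0)
# Ask the driver for MJPEG at 1280x720, 30 fps; unsupported requests fail silently.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)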

grab frame NTSCtoUSB dongle, opencv2, python wrapper

Context:
I have been playing around with Python's cv2 wrapper for OpenCV.
I wanted to play with a few ideas and use a wide-angle camera similar to the 'rear view' cameras in cars.
I got one from a scrapped crash car (it's got 4 wires). I took an educated guess from the wire color coding and connected it up so that the power and ground lines are fed from a USB Type-A plug and the NTSC composite+/composite- signals go through an RCA connector.
I bought an NTSC-to-USB converter like this one.
It came with drivers and some off-the-shelf VHStoDVD software.
The problem:
I used the run-of-the-mill examples found online to try it out, like this:
import numpy as np
import cv2

cam_index = 0
cap = cv2.VideoCapture(cam_index)
print cap.isOpened()

ret, frame = cap.read()
#print frame.shape[0]
#print frame.shape[1]

while cap.isOpened():
    ret, frame = cap.read()
    #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release and close
cap.release()
cv2.destroyAllWindows()
This is the output from the shell:
True
Traceback (most recent call last):
File "C:/../cam_capture_.py", line 19, in <module>
cv2.imshow('frame', frame)
error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow
>>>
Key observations:
1. In Control Panel the USB dongle is shown as 'OEM capture' under 'Sound, video & game controllers', so it is not seen as a simple plug-and-play webcam under 'Imaging devices'.
2. If I open the VHStoDVD software I need to configure 2 aspects:
set the input as Composite
set the encoding as NTSC
Then the feed from the analog camera is shown OK within the VHStoDVD application.
3. When I open the device's video channel in FLV (device capture), the device stream is just a black screen, but IF I open the VHStoDVD software WHILE FLV is streaming, I get the camera's feed streaming in FLV and a black screen shown on the VHStoDVD feed. Another important difference is that there is a huge latency of approx. 0.5 s when the feed is in FLV, as opposed to running in VHStoDVD.
When running "cam_capture.py" as per the sample code above at some put during runtime i will eventually get a stop error code 0x0000008e:
detail:
stop: 0x0000008E (0xC0000005, 0xB8B5F417, 0X9DC979F4, 0X00000000 )
ks.sys - Address B8B5F417 base at B8B5900, Datestamp...
beg mem dump
phy mem dump complete
5.if i try to print frame.shape[0] or frame.shape[1] I get a type error say I cannot print type None
6.if try other cam_index the result is always false
TLDR:
In 'Control Panel' the camera device is under 'Sound, video & game controllers', not under 'Imaging devices';
the cam_index is zero;
cap.isOpened() returns True;
the frame size is None;
if VHStoDVD is running with Composite + NTSC configured, the camera works; obviously you can't see the image in the attached printscreen, but trust me! ;)
Is there any form of initialisation at the start of communication with the dongle that could fix this, i.e. emulate the VHStoDVD settings (Composite + NTSC)? I thought I could Bus Pirate the start of comms between VHStoDVD and the dongle, but it feels like I am going above and beyond to do something I thought was a turnkey solution.
Any constructive insights, suggestions or corrections are most welcome!
Thanks
Cheers
OK, so after deeper investigation the initial suspicion was confirmed: because the NTSC dongle is not handled as an imaging device (it is seen as a video controller, similar to an emulated TV tuner card), although we are able to call cv2.VideoCapture with cam_index=0, the video channel itself is not transmitting, because we are required to define a bunch of parameters:
encoding
frame size
fps rate etc.
The problem is that, because the device is not supported as an imaging device, calling cv2.VideoCapture.set(parameter, value) doesn't seem to change anything on the original video feed.
I didn't find a solution, but I found a workaround. There seem to be quite a few options online; search for keywords like "DV to webcam" or "camcorder as a webcam".
I used DVdriver (http://www.trackerpod.com/TCamWeb/download.htm) (I used the trial because I am cheap!).
Why does it work?
As far as I can tell, DVdriver receives the data from the device that is registered as a video controller (similar to a capture from "Windows Movie Maker" or ffmpeg) and then, through "fairy dust", outputs the frames on cam_index=0 (assuming no other cam is connected) as an 'imaging device' webcam.
Summary
TLDR: use DVdriver or similar.
I found a workaround, but I would really like to understand it from first principles and possibly generate a similar initialisation of the NTSC dongle from within Python, without any other software dependencies. Until then, hopefully this will help others who were also struggling or assuming it was a hardware issue.
I will now leave you with some Beckett:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. (!)
It's a few months late, but this might be useful. I was working on a Windows computer and had installed the drivers that came with the device. I tried the same code as in your question with an Ezcap from Somagic and got the same error. Since "frame is None", I decided to try an if statement around it, in case it was an initialization error. Placing this into the loop:
if frame is None:
    print 0
else:
    print 1
The result is: 01110111111111111111111111111...
And if the frame = cap.read() above the loop is commented out, I get: 00111111111111111...
So for my capture device, it appears that all frames beyond the 5th are captured. I'm not sure why this is, but it might be a useful workaround for now.
Disclaimer: Unfortunately, my camera input is currently in a radiation field, so I can't get to it for a couple of weeks to make sure it definitely works. However, the images are currently a black frame (which is expected without proper input).
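A minimal sketch of that warm-up idea, discarding empty reads until the device returns a valid frame (the retry limit of 30 is an arbitrary assumption):

import cv2

cap = cv2.VideoCapture(0)
frame = None
# The first few reads can return None while the capture pipeline initialises,
# so retry a bounded number of times before giving up.
for _ in range(30):
    ret, frame = cap.read()
    if ret and frame is not None:
        break

if frame is not None:
    cv2.imshow('frame', frame)
    cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()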
I faced the same issue. As a workaround, I first tried the solution proposed by #user3380927 and it did work. But since I didn't want to rely on external software, I started tweaking parameters using OpenCV in Python.
These lines of code worked like a charm (you have to insert them before reading the frame for the first time):
cam.set(cv2.CAP_FFMPEG, True)
cam.set(cv2.CAP_PROP_FPS, 30)
So, the full code for basic camera reading is as follows:
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_FFMPEG, True)
cam.set(cv2.CAP_PROP_FPS, 30)

while True:
    ret, frame = cam.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
You can then apply image processing operations as usual. Just for reference, this was my configuration:
Opencv 3.1.0
Python 2.7.5
Windows 8.1
Elgato Video Capture device (this also showed up under 'Sound, video & game controllers')
