My OpenCV is version 4.5.4, compiled with the GStreamer library.
In my situation I'm reading frames from streaming video. There is a gap between the stream's current time and the time of the frames returned by cv2.VideoCapture, and when the next video starts streaming, cv2.VideoCapture doesn't get frames from it because OpenCV hasn't reached the last frame of the previous stream yet.
How do I solve this?
(When I used OpenCV version 3.4.0, I didn't face this issue.)
My code (not using multiprocessing/threads):
import cv2

def connect_streaming(rtsp_url):
    while True:
        video_cap = cv2.VideoCapture(rtsp_url, cv2.CAP_GSTREAMER)
        while video_cap.isOpened():
            ret, frame = video_cap.read()
            if not ret:
                video_cap.release()
                break
            ...
I solved it.
I think the best way is to use a thread for grabbing frames from the stream.
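A minimal sketch of what I mean (the class and its details here are my own, not something from OpenCV itself): a background thread keeps reading, so the main code always gets the newest frame and never lags behind the live stream; when read() fails, the thread stops and you can reconnect.

import threading
import cv2

class StreamReader:
    """Background thread that always keeps only the newest frame of an RTSP stream."""
    def __init__(self, rtsp_url):
        self.cap = cv2.VideoCapture(rtsp_url, cv2.CAP_GSTREAMER)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        while self.running:
            ret, frame = self.cap.read()
            if not ret:              # stream ended or dropped: stop so the caller can reconnect
                self.running = False
                break
            with self.lock:
                self.frame = frame   # overwrite; stale frames are simply discarded

    def read(self):
        with self.lock:
            return self.frame

    def release(self):
        self.running = False
        self.thread.join()
        self.cap.release()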
I hope this helps others who run into the same issue.
Could you please tell me, is there a way to get video streaming into NiFi? I am using NiFi's ExecuteProcess to run Python and OpenCV in order to capture the video stream, but I am not able to make multiple flow files. My program runs the Python code and then puts all the captured frames into a single flow file. Since I want to do analysis on live video using YOLOv3, I want each frame in a different flow file. The code is given below. My program runs the Python code using ExecuteProcess, and when it finishes it stores everything from the output console in a single flow file, but I want multiple flow files.
If there is a better way to get live video streaming into NiFi, please share it.
import cv2

cap = cv2.VideoCapture("videolink.mp4")
while True:
    ret, image = cap.read()
    if not ret:
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    print(jp)
cap.release()
cv2.destroyAllWindows()
I think the simplest way to set up the streaming is:
1/ Set up a ListenHTTP processor that will receive video frames through incoming POST requests; this becomes your data ingestion point.
2/ In your Python script, replace the print with an HTTP POST call, like this:
import cv2
import requests

cap = cv2.VideoCapture("videolink.mp4")
while True:
    ret, image = cap.read()
    if not ret:
        break
    ret, jpeg = cv2.imencode('.jpg', image)
    jp = jpeg.tobytes()
    # PORT and BASE_PATH must be configured in the ListenHTTP processor
    requests.post('http://localhost:PORT/BASE_PATH', data=jp)
cap.release()
cv2.destroyAllWindows()
I get video streams from IP cameras over RTSP in Python and want the absolute timestamp of each frame, as carried in the sender reports. If I just read a stream, the camera time obviously starts from zero:
ret, image_np = cap.read()
camtime = cap.get(cv2.CAP_PROP_POS_MSEC)/1000.
I need to know when the frames were recorded, since the cameras are located in different time zones and the frames arrive with different delays. Can someone advise which library to use and what the Python code should look like?
Thanks!
P.S. I can get camera streaming start time from ffmpeg in SDP (option "o").
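For clarity, what I have in mind is roughly the following sketch; stream_start stands for the absolute start time I can read from the SDP, and I am not sure this offset arithmetic is the right approach, which is why I am asking.

import datetime
import cv2

# Hypothetical: absolute stream start time, e.g. parsed from the SDP "o" field.
stream_start = datetime.datetime(2021, 6, 1, 12, 0, 0, tzinfo=datetime.timezone.utc)

cap = cv2.VideoCapture("rtsp://camera.example/stream")
while True:
    ret, image_np = cap.read()
    if not ret:
        break
    camtime = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.  # seconds since capture start
    frame_time = stream_start + datetime.timedelta(seconds=camtime)
    print(frame_time.isoformat())
cap.release()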
I am trying to get images from a webcam with OpenCV and Python. The code is as basic as:
import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.cv.CV_CAP_PROP_FPS, 20)

a = 30
t = time.time()
while (a > 0):
    now = time.time()
    print now - t
    t = now
    ret, frame = cap.read()
    # Some processes
    print a, ret
    print frame.shape
    a = a - 1
    k = cv2.waitKey(20)
    if k == 27:
        break
cv2.destroyAllWindows()
But it runs slowly. Output of the program:
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
HIGHGUI ERROR: V4L: Property <unknown property string>(5) not supported by device
8.82148742676e-06
select timeout
30 True
(480, 640, 3)
2.10035800934
select timeout
29 True
(480, 640, 3)
2.06729602814
select timeout
28 True
(480, 640, 3)
2.07144904137
select timeout
Configuration:
Beaglebone Black RevC
Debian wheezy
opencv 2.4
python 2.7
The "secret" to obtaining higher FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread.
Calling the read() method of cv2.VideoCapture in the main loop makes the entire process very slow, because it has to wait for each I/O operation to complete before it can move on to the next one (it is a blocking call).
In order to accomplish this FPS increase/latency decrease, our goal is to move the reading of frames from a webcam or USB device to an entirely different thread, totally separate from our main Python script.
This will allow frames to be read continuously from the I/O thread, all while our root thread processes the current frame. Once the root thread has finished processing its frame, it simply needs to grab the current frame from the I/O thread. This is accomplished without having to wait for blocking I/O operations.
You can read Increasing webcam FPS with Python and OpenCV for the steps to implement this with threads.
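For illustration, the idea looks roughly like this (a sketch in the spirit of that approach, not the exact code from the article):

from threading import Thread
import cv2

class WebcamStream:
    """Reads frames in a background thread so the main loop never blocks on I/O."""
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.stopped = False

    def start(self):
        t = Thread(target=self.update)
        t.daemon = True
        t.start()
        return self

    def update(self):
        # keep grabbing frames until the main thread asks us to stop
        while not self.stopped:
            self.ret, self.frame = self.cap.read()

    def read(self):
        # return the most recent frame grabbed by the background thread
        return self.frame

    def stop(self):
        self.stopped = True

# usage: process the latest frame without waiting on cap.read()
stream = WebcamStream(0).start()
while True:
    frame = stream.read()
    cv2.imshow("output", frame)
    if cv2.waitKey(20) == 27:  # Escape
        break
stream.stop()
cv2.destroyAllWindows()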
EDIT
Based on the discussions in our comments, I feel you could rewrite the code as follows:
import cv2

cv2.namedWindow("output")
cap = cv2.VideoCapture(0)

if cap.isOpened():  # Getting the first frame
    ret, frame = cap.read()
else:
    ret = False

while ret:
    cv2.imshow("output", frame)
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    if key == 27:  # exit on Escape key
        break

cv2.destroyWindow("output")
I encountered a similar problem when I was working on a project using OpenCV 2.4.9 on the Intel Edison platform. Before doing any processing, it was taking roughly 80ms just to perform the frame grab. It turns out that OpenCV's camera capture logic for Linux doesn't seem to be implemented properly, at least in the 2.4.9 release. The underlying driver only uses one buffer, so it's not possible to use multi-threading in the application layer to work around it - until you attempt to grab the next frame, the only buffer in the V4L2 driver is locked.
The solution is to not use OpenCV's VideoCapture class. Maybe it was fixed to use a sensible number of buffers at some point, but as of 2.4.9, it wasn't. In fact, if you look at this article by the same author as the link provided by @Nickil Maveli, you'll find that as soon as he provides suggestions for improving the FPS on a Raspberry Pi, he stops using OpenCV's VideoCapture. I don't believe that is a coincidence.
Here's my post about it on the Intel Edison forum: https://communities.intel.com/thread/58544.
I basically wound up writing my own class to handle the frame grabs, directly using V4L2. That way you can provide a circular list of buffers and allow the frame grabbing and application logic to be properly decoupled. That was done in C++ though, for a C++ application. Assuming the above link delivers on its promises, that might be a far easier approach. I'm not sure whether it would work on BeagleBone, but maybe there's something similar to PiCamera out there. Good luck.
EDIT: I took a look at the source code for 2.4.11 of OpenCV. It looks like they now default to using 4 buffers, but you must be using V4L2 to take advantage of that. If you look closely at your error message HIGHGUI ERROR: V4L: Property..., you see that it references V4L, not V4L2. That means that the build of OpenCV you're using is falling back on the old V4L driver. In addition to the singular buffer causing performance issues, you're using an ancient driver that probably has many limitations and performance problems of its own.
Your best bet would be to build OpenCV yourself to make sure that it uses V4L2. If I recall correctly, the OpenCV configuration process checks whether the V4L2 drivers are installed on the machine and builds it accordingly, so you'll want to make sure that V4L2 and any related dev packages are installed on the machine you use to build OpenCV.
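As a quick sanity check (this is just a suggestion of mine, not part of the original build advice), you can inspect the Video I/O section of cv2.getBuildInformation() to see whether your build actually picked up V4L2:

import cv2

# Print only the Video I/O related lines of the build configuration;
# look for the V4L/V4L2 entry to confirm which driver OpenCV was built against.
info = cv2.getBuildInformation()
for line in info.splitlines():
    if "Video I/O" in line or "V4L" in line:
        print(line)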
Try this one! I replaced some code in the cap.set() section.
import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # CV_CAP_PROP_FRAME_WIDTH
cap.set(4, 480)  # CV_CAP_PROP_FRAME_HEIGHT
cap.set(5, 20)   # CV_CAP_PROP_FPS

a = 30
t = time.time()
while (a > 0):
    now = time.time()
    print now - t
    t = now
    ret, frame = cap.read()
    # Some processes
    print a, ret
    print frame.shape
    a = a - 1
    k = cv2.waitKey(20)
    if k == 27:
        break
cv2.destroyAllWindows()
Output (PC webcam); your original code didn't work for me as written.
>>0.0
>>30 True
>>(480, 640, 3)
>>0.246999979019
>>29 True
>>(480, 640, 3)
>>0.0249998569489
>>28 True
>>(480, 640, 3)
>>0.0280001163483
>>27 True
>>(480, 640, 3)
>>0.0320000648499
Context:
I have been playing around with Python's wrapper for OpenCV 2.
I wanted to play with a few ideas and use a wide-angle camera similar to the 'rear view' cameras in cars.
I got one from a scrapped crash car (it's got 4 wires). I took an educated guess from the wires' color coding and connected it up so that I power the power and ground lines from a USB type A plug and feed the NTSC composite+/composite- into an RCA connector.
I bought an NTSC to USB converter like this one.
It came with drivers and some off-the-shelf VHStoDVD software.
The problem:
I used the run-of-the-mill examples online to test it, like this:
import numpy as np
import cv2

cam_index = 0
cap = cv2.VideoCapture(cam_index)
print cap.isOpened()

ret, frame = cap.read()
#print frame.shape[0]
#print frame.shape[1]

while (cap.isOpened()):
    ret, frame = cap.read()
    #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# release and close
cap.release()
cv2.destroyAllWindows()
This is the output from the shell:
True
Traceback (most recent call last):
File "C:/../cam_capture_.py", line 19, in <module>
cv2.imshow('frame', frame)
error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow
>>>
Key observations:
SCREENSHOTS
1. In Control Panel the USB dongle is shown as 'OEM Capture' under 'Sound, video & game controllers', so it's not seen as a simple plug-and-play webcam under 'Imaging devices'.
2. If I open the VHStoDVD software I need to configure 2 aspects:
set as Composite
set encoding as NTSC
Then the camera feed from the analog camera is shown OK within the VHStoDVD application.
3. When I open the device's video channel in FLV (device capture), the device stream is just a black screen, but if I open the VHStoDVD software while FLV is streaming, I get the camera's feed streaming in FLV and a black screen on the VHStoDVD feed. Another important difference is that there is a huge latency of approx. 0.5 s when the feed is in FLV, as opposed to running in VHStoDVD.
4. When running "cam_capture.py" as per the sample code above, at some point during runtime I eventually get a stop error, code 0x0000008E:
detail:
stop: 0x0000008E (0xC0000005, 0xB8B5F417, 0X9DC979F4, 0X00000000 )
ks.sys - Address B8B5F417 base at B8B5900, Datestamp...
beg mem dump
phy mem dump complete
5. If I try to print frame.shape[0] or frame.shape[1] I get a type error saying I cannot print type None.
6. If I try other cam_index values the result is always False.
TLDR:
In Control Panel the camera device is under 'Sound, video & game controllers', not under 'Imaging devices';
The cam_index is zero;
capture.isOpened() returns True;
The frame is None, so it has no size;
If VHStoDVD is running with composite NTSC configured, the camera works; obviously you can't see the image in the attached printscreen, but trust me! ;)
Is there any form of initialisation of the communication with the dongle that could fix this, i.e. emulate the VHStoDVD settings (composite + NTSC)? I thought I could Bus Pirate the start of comms between VHStoDVD and the dongle, but it feels like I am going above and beyond to do something I thought was a turnkey solution.
Any constructive insights, suggestions, or corrections are most welcome!
Thanks
Cheers
OK, so after deeper investigation the initial suspicion was confirmed: because the NTSC dongle is not handled as an imaging device (it's seen as a video controller, similar to an emulated TV tuner card), although we are able to call cv2.VideoCapture with cam_index=0, the video channel itself is not transmitting, because we are required to define a bunch of parameters:
encoding
frame size
fps rate etc
The problem is that, because the device is not supported as an imaging device, calling cv2.VideoCapture.set(parameter, value) doesn't seem to change anything on the original video feed.
I didn't find a solution, but I found a workaround. There seem to be quite a few options online; search for the keywords "DV to webcam" or "camcorder as a webcam".
I used DVdriver (http://www.trackerpod.com/TCamWeb/download.htm) (I used the trial because I am cheap!).
Why does it work?
As far as I can tell, DVdriver receives the data from the device, which is set as a video controller (similar to a capture from "Windows Movie Maker" or ffmpeg), and then through "fairy dust" outputs the frames on cam_index=0 (assuming no other cam is connected) as an 'imaging device' webcam.
Summary
TLDR: use DVdriver or similar.
I found a workaround, but I would really like to understand it from first principles and possibly generate a similar initialisation of the NTSC dongle from within Python, without any other software dependencies. Until then, hopefully this will help others who were also struggling or assuming it was a hardware issue.
I will now leave you with some Beckett:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. (!)
It's a few months late, but this might be useful. I was working on a Windows computer and had installed the drivers that came with the device. I tried the same code as in your question with an Ezcap from Somagic and got the same error. Since frame is None, I decided to try an if statement around it, in case it was an initialization error. Placing this into the loop:
if frame is None:
    print 0
else:
    print 1
The result is: 01110111111111111111111111111...
And if the frame = cap.read() above the loop is commented out, I get: 00111111111111111...
So for my capture device, all frames beyond the 5th are captured. I'm not sure why this is, but it might be a useful workaround for now.
Disclaimer: Unfortunately, my camera input is currently in a radiation field, so I can't get to it for a couple of weeks to verify. However, the images are currently a black frame (which is expected without proper input).
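For what it's worth, the workaround amounts to something like this (just a sketch, untested on my setup for the reason above):

import cv2

cap = cv2.VideoCapture(0)

# Discard the initial empty grabs; on my device the first few frames come back as None.
for _ in range(30):
    ret, frame = cap.read()
    if frame is not None:
        break

while True:
    ret, frame = cap.read()
    if frame is None:
        continue  # skip any other empty grabs
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()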
I faced the same issue. As a workaround, I first tried the solution proposed by @user3380927, and it did indeed work. But since I didn't want to rely on external software, I started tweaking parameters with OpenCV in Python.
These lines of code worked like a charm (you have to insert them before reading the frame for the first time):
cam.set(cv2.CAP_FFMPEG, True)
cam.set(cv2.CAP_PROP_FPS, 30)
So, the full code for basic camera reading is as follows:
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_FFMPEG, True)
cam.set(cv2.CAP_PROP_FPS, 30)

while True:
    ret, frame = cam.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()
You can then apply image processing operations as usual. Just for reference, this was my configuration:
Opencv 3.1.0
Python 2.7.5
Windows 8.1
Elgato Video Capture device (this was also listed under 'Sound, video & game controllers')
I'm having problems getting a video stream from an IP camera. I'm using OpenCV to get the images from it. Here's the code I have:
import sys
import cv

video = "http://prot-on.dyndns.org:8080/video2.mjpeg"
capture = cv.CaptureFromFile(video)
cv.NamedWindow('Video Stream', 1)

while True:
    # capture the current frame
    frame = cv.QueryFrame(capture)
    if frame is None:
        break
    else:
        #detect(frame)
        cv.ShowImage('Video Stream', frame)
    k = cv.WaitKey(10)  # poll for a key press
    if k == 0x1b:  # ESC
        print 'ESC pressed. Exiting ...'
        break
Actually, this works, but it takes too much time to display the images. I'm guessing it's because of this error from ffmpeg:
[mjpeg @ 0x8cd0940]max_analyze_duration reached
[mjpeg @ 0x8cd0940]Estimating duration from bitrate, this may be inaccurate
I'm not a Python expert, so any help would be appreciated!
First, MJPEG is a relatively simple video format. If you read your IP camera's manual, it's very likely that you can find out how to display the video in a browser with a bit of JavaScript code. In fact, if you open http://prot-on.dyndns.org:8080/video2.mjpeg in Google Chrome, you should see the video without any problem. (Maybe you shouldn't leave the real URL of your camera here.)
Second, as far as I can see, the frame rate of your camera is pretty low. That might be due to Internet latency or your camera's settings. Compare what you see in Chrome with the video displayed by your code; if they are of the same quality, then it's not your code's problem.