I created a web application using Flask, in order to trigger a detection routine through an HTTP request.
Basically, every time a GET request is sent to the endpoint URL, I want a function to be executed.
The code I'm using is:
import cv2
from flask import Flask

web_app = Flask(__name__)

@web_app.route('/test/v1.0/run', methods=['GET'])
def main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        ret, frame = cap.read()
        # ***performing operations on each frame and running ML models on them***
    return ('', 204)

if __name__ == '__main__':
    web_app.run(debug=True)
Everything works fine the first time, if I use:
curl -i http://localhost:5000/test/v1.0/run
the function main() is executed, and at the end the results are uploaded to an online database.
After this, the program keeps listening on the URL. If I send another GET request, main() is executed again, but it hangs after the first iteration of the while loop.
I tried simply running the same code multiple times by placing it in a for loop:
for i in range(10):
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        ret, frame = cap.read()
        # ***performing operations on each frame and running ML models on them***
and it works without any problem, so the hang shouldn't be caused by anything I'm doing inside the loop.
The problem seems to come from the fact that I'm using Flask to trigger the function, but in that case I don't understand why main() hangs after starting. I'm very new to Flask and web applications in general, so I'm probably missing something very simple here.
The problem was that I was also displaying the frames collected from the camera, using
cv2.imshow("window", frame)
and, even though at the end of the program I was closing everything with:
cap.release()
cv2.destroyAllWindows()
something remained hanging and got the process stuck at the next iteration.
I solved it by removing the cv2.imshow call. I don't really need to visualize the stream, so I can live with it.
Mostly out of curiosity, though, I'll try to figure out how to make it work even when visualizing the video frames.
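If I do revisit it, my first guess (untested; just the standard advice for OpenCV's HighGUI) would be to pump the GUI event loop with cv2.waitKey() after every imshow, and a few more times after destroyAllWindows(), so the window is actually torn down before the request handler returns:

import cv2

def run_with_display():
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow("window", frame)
        # waitKey() drives the HighGUI event loop; without it the window
        # never repaints and GUI state can be left dangling
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    # pump the event loop a few more times so the window really closes
    for _ in range(5):
        cv2.waitKey(1)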
I'm running some Python code in a loop (though not a fast one).
Example:
import time
import cv2

stream = cv2.VideoCapture(0)
while True:
    ret, frame = stream.read()
    cv2.imshow("Frame", frame)
    cv2.waitKey(1)
    time.sleep(10)
After a few minutes of execution I get this error:
E1205 11:19:12.803174714 32302 backup_poller.cc:132] Run client channel backup poller: {"created":"#1607177952.802346313","description":"pollset_work","file":"src/core/lib/iomgr/ev_epollex_linux.cc","file_line":324,"referenced_errors":[{"created":"#1607177952.802226759","description":"Bad file descriptor","errno":9,"file":"src/core/lib/iomgr/ev_epollex_linux.cc","file_line":954,"os_error":"Bad file descriptor","syscall":"epoll_wait"}]}
Does anyone know what this is? I searched on Google but didn't find any good solutions.
After creating a VideoCapture, always test stream.isOpened(). If that is False, you can't do anything with the video.
After ret, frame = stream.read(), always test whether ret is True. If it's False, the video has ended and you need to end the loop.
I would also recommend not simply sleeping for ten seconds. During that time the GUI event loop won't run (because you aren't running waitKey), which makes the imshow window unresponsive, and your OS might decide to kill the process.
Use waitKey(10000) instead, and check whether waitKey() waited the full interval; it may return earlier because of a key event. You could accept that, or repeat waitKey with the remaining time.
A camera will produce frames at regular intervals whether you read them or not. If you don't read them, they queue up; the driver will not like that and may decide to throw errors at you.
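Putting these points together, a minimal sketch (assuming a local webcam at index 0) might look like this:

import time
import cv2

stream = cv2.VideoCapture(0)
if not stream.isOpened():          # can't do anything without the device
    raise SystemExit("cannot open capture device")

while True:
    ret, frame = stream.read()
    if not ret:                    # stream ended (or the grab failed)
        break
    cv2.imshow("Frame", frame)
    # Wait ~10 s while still pumping the GUI event loop: waitKey may
    # return early on a key press, so repeat with the remaining time.
    deadline = time.time() + 10.0
    while time.time() < deadline:
        remaining_ms = max(1, int((deadline - time.time()) * 1000))
        if cv2.waitKey(remaining_ms) == 27:   # Esc quits immediately
            stream.release()
            cv2.destroyAllWindows()
            raise SystemExit

stream.release()
cv2.destroyAllWindows()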
I have an RTSP stream going to Amazon Kinesis Video Streams, and I want to run custom image processing on each frame of the stream. The image processing algorithm is written in Python, so naturally I want to pull from KVS with Python and feed the frames into it. To start, I tried making a Python app that just displays the KVS stream, following advice from this post (although that post uses Amazon Rekognition for custom processing; my use case, custom Python processing, is slightly different).
My questions are:
Why is my HLS Python KVS player laggy/choppy (it randomly pauses, then plays the past few seconds very quickly)? The HLS streamer available here, by contrast, looks quite good.
How can you effectively pull an Amazon Kinesis Video Stream for custom python processing? What are the pros/cons of using HLS vs Amazon's GetMedia API?
I also looked into using Amazon's GetMedia API, and found this post where someone attempted to use GetMedia but then ended up going with HLS.
I found the approach to using GetMedia for custom processing explained here (python) and here (c++).
Is it worth using Sagemaker for this? For my image processing, I only want to perform inference, I don't need to train the network, it is already trained.
Code for kvs_player.py:
import boto3
import cv2

STREAM_NAME = "ExampleStream"
STREAM_ARN = "MY_STREAM_ARN"
AWS_REGION = 'us-east-1'

def hls_stream():
    kv_client = boto3.client("kinesisvideo", region_name=AWS_REGION)
    endpoint = kv_client.get_data_endpoint(
        StreamName=STREAM_NAME,
        APIName="GET_HLS_STREAMING_SESSION_URL"
    )['DataEndpoint']
    print(endpoint)

    # Grab the HLS stream URL from the endpoint
    kvam_client = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint, region_name=AWS_REGION)
    url = kvam_client.get_hls_streaming_session_url(
        StreamName=STREAM_NAME,
        PlaybackMode="LIVE"
    )['HLSStreamingSessionURL']

    vcap = cv2.VideoCapture(url)
    while True:
        # Capture frame-by-frame
        ret, frame = vcap.read()
        if frame is not None:
            # Display the resulting frame
            cv2.imshow('frame', frame)
            # Press q to close the video window before it ends if you want
            if cv2.waitKey(22) & 0xFF == ord('q'):
                break
        else:
            print("Frame is None")
            break

    # When everything is done, release the capture
    vcap.release()
    cv2.destroyAllWindows()
    print("Video stop")

if __name__ == '__main__':
    hls_stream()
Some generic answers
You might want to do some debugging in your application to understand what "laggy" means. It could simply be network latency, errors during transmission, or performance issues in decoding, Python processing, and later re-encoding.
GetMedia is a fast API, and the parser library https://github.com/aws/amazon-kinesis-video-streams-parser-library can be used for real-time parsing. You will likely need to decode the video stream, which is an operation roughly two orders of magnitude more CPU- and I/O-intensive, so you might want to use a hardware-accelerated decoder (or at least a good software decoder that takes advantage of the SIMD/SSE2 instruction set). JCodec is certainly not the one.
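For reference, fetching the raw stream with GetMedia from Python might look roughly like this (a sketch only; the stream name and region are placeholders, and the returned payload is raw MKV that still has to be parsed and decoded before any frame-level work):

import boto3

REGION = "us-east-1"          # placeholder
STREAM = "ExampleStream"      # placeholder

kv = boto3.client("kinesisvideo", region_name=REGION)
endpoint = kv.get_data_endpoint(
    StreamName=STREAM,
    APIName="GET_MEDIA",
)["DataEndpoint"]

media = boto3.client("kinesis-video-media",
                     endpoint_url=endpoint, region_name=REGION)
resp = media.get_media(
    StreamName=STREAM,
    StartSelector={"StartSelectorType": "NOW"},
)

# resp["Payload"] is a streaming body of MKV fragments; a parser
# (like the library linked above) is needed before frame-level work
chunk = resp["Payload"].read(16 * 1024)
print(len(chunk))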
It depends on your application, really. At any rate, before the actual Sagemaker run you need to retrieve the fragments, then parse and decode them. There is a KVS Sagemaker KIT sample you can use for more serious inference applications.
For more scoped and targeted questions, you can go directly to the GitHub page for the appropriate project/asset in question and cut an issue with a detailed description and verbose logs.
I want to be able to view a queue of OpenCV frames (from my webcam) in an HTML file.
I have a writer function, which writes the webcam's frames to a Python queue:
import cv2

def writer():
    cap = cv2.VideoCapture(0)
    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        # If a frame was read, add it to the queue
        if ret:
            q.put(frame)
        else:
            break
The main part of my code is as follows:
import queue
import threading

q = queue.Queue()
threading.Thread(target=writer).start()
I now have a queue called q, which I would like to view in a HTML file. What is the best method to do this?
I will assume you know Python and web development (HTML, CSS, JavaScript), since you need to create something on both ends.
In your HTML, use JavaScript to continuously request new frames and show them. One easy way is to use jQuery's AJAX $.get() functionality to grab the new image, and the JavaScript setInterval() method to call the loading function every x milliseconds. Check this post on reloading an image for simple implementations, many without AJAX, for example:
setInterval(function () {
    var d = new Date();
    $("#myimg").attr("src", "/myimg.jpg?" + d.getTime());
}, 2000);
The reason I suggest doing this with AJAX is that you probably want to create a web server in Python, so that your JavaScript can request new frames by consuming a simple REST API that serves the image frames. In Python it's easy to create a web server from scratch, or to use a library like Flask if you need to grow this into something more complex afterwards. From your description, all you need is a web server with one service: serving an image.
A really hacky and slow way of doing this without a web server is to just use imwrite() to write your frames to disk, and waitKey(milliseconds) in your loop to wait before rewriting the same image file. That way you could use the JavaScript code above to reload the image.
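For the Python side, a minimal sketch of such a server using Flask could look like this (the route name /frame.jpg and the shared queue q are my choices for illustration, not anything standard):

import queue
import threading

import cv2
from flask import Flask, Response

app = Flask(__name__)
q = queue.Queue()

def writer():
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        q.put(frame)

@app.route("/frame.jpg")
def frame_jpg():
    frame = q.get()               # block until at least one frame exists
    while True:                   # then drain the queue to the newest frame
        try:
            frame = q.get_nowait()
        except queue.Empty:
            break
    ok, jpg = cv2.imencode(".jpg", frame)
    return Response(jpg.tobytes(), mimetype="image/jpeg")

if __name__ == "__main__":
    threading.Thread(target=writer, daemon=True).start()
    app.run()

The setInterval() snippet above would then point its src at /frame.jpg?<timestamp> to poll for new frames.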
I'm currently building a website with Flask. After a user signs up, I want to send an email and run an SQL insert, but this runs slowly, so I want to execute it in the background while the main flow continues and returns the HTML, without getting stuck on a loading page.
def send_email_and_insert():
    # some smtplib code here
    # and some INSERT INTO SQL too
    # this function makes the page stuck loading for 5-10 seconds
    ...

@app.route('/sign_up/')
def sign_up():
    # some code here
    send_email_and_insert()
    return render_template('thanks_for_sign_up.html')
I think it can be done with multiprocessing or threading, but I can't get it right. Or maybe there is another solution?
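For example, I imagine something like this (a rough sketch of the threading idea; I'm not sure it's right, and exceptions raised in the background thread would be silently lost):

import threading
from flask import Flask, render_template

app = Flask(__name__)

def send_email_and_insert():
    # some smtplib code here
    # and some INSERT INTO SQL too
    ...

@app.route('/sign_up/')
def sign_up():
    # some code here
    # Fire and forget: the response is returned immediately while the
    # slow work runs on a background thread.
    threading.Thread(target=send_email_and_insert, daemon=True).start()
    return render_template('thanks_for_sign_up.html')

For anything beyond a toy setup, a task queue (e.g. Celery or RQ) would be the more robust version of the same idea.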
I'm trying to get images from a webcam with OpenCV and Python. The code is as basic as:
import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.cv.CV_CAP_PROP_FPS, 20)

a = 30
t = time.time()
while a > 0:
    now = time.time()
    print now - t
    t = now
    ret, frame = cap.read()
    # Some processes
    print a, ret
    print frame.shape
    a = a - 1
    k = cv2.waitKey(20)
    if k == 27:
        break
cv2.destroyAllWindows()
But it runs slowly. The program's output:
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
HIGHGUI ERROR: V4L: Property <unknown property string>(5) not supported by device
8.82148742676e-06
select timeout
30 True
(480, 640, 3)
2.10035800934
select timeout
29 True
(480, 640, 3)
2.06729602814
select timeout
28 True
(480, 640, 3)
2.07144904137
select timeout
Configuration:
BeagleBone Black Rev C
Debian Wheezy
OpenCV 2.4
Python 2.7
The "secret" to obtaining higher FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread.
Calling the read() method of a cv2.VideoCapture object is a blocking operation: the main thread has to wait for each frame's I/O to complete before it can move on, which slows down the entire process.
In order to accomplish this FPS increase/latency decrease, our goal is to move the reading of frames from a webcam or USB device to an entirely different thread, totally separate from our main Python script.
This will allow frames to be read continuously from the I/O thread, all while our root thread processes the current frame. Once the root thread has finished processing its frame, it simply needs to grab the current frame from the I/O thread. This is accomplished without having to wait for blocking I/O operations.
You can read Increasing webcam FPS with Python and OpenCV to know the steps in implementing threads.
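A minimal sketch of that idea (roughly what the linked article implements, trimmed down):

import threading
import cv2

class ThreadedCapture:
    """Read frames on a background thread so the main loop never blocks on camera I/O."""

    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.stopped = False
        t = threading.Thread(target=self._update)
        t.daemon = True
        t.start()

    def _update(self):
        # keep grabbing the newest frame until stopped
        while not self.stopped:
            self.ret, self.frame = self.cap.read()

    def read(self):
        # return the most recent frame without waiting on the camera
        return self.ret, self.frame

    def stop(self):
        self.stopped = True
        self.cap.release()

The processing loop then calls stream.read() whenever it needs the latest frame, instead of waiting on the camera itself.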
EDIT
Based on the discussions in our comments, I feel you could rewrite the code as follows:
import cv2

cv2.namedWindow("output")
cap = cv2.VideoCapture(0)

if cap.isOpened():  # Getting the first frame
    ret, frame = cap.read()
else:
    ret = False

while ret:
    cv2.imshow("output", frame)
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    if key == 27:  # exit on Escape key
        break

cv2.destroyWindow("output")
I encountered a similar problem when I was working on a project using OpenCV 2.4.9 on the Intel Edison platform. Before doing any processing, it was taking roughly 80ms just to perform the frame grab. It turns out that OpenCV's camera capture logic for Linux doesn't seem to be implemented properly, at least in the 2.4.9 release. The underlying driver only uses one buffer, so it's not possible to use multi-threading in the application layer to work around it - until you attempt to grab the next frame, the only buffer in the V4L2 driver is locked.
The solution is to not use OpenCV's VideoCapture class. Maybe it was fixed to use a sensible number of buffers at some point, but as of 2.4.9, it wasn't. In fact, if you look at this article by the same author as the link provided by @Nickil Maveli, you'll find that as soon as he provides suggestions for improving the FPS on a Raspberry Pi, he stops using OpenCV's VideoCapture. I don't believe that is a coincidence.
Here's my post about it on the Intel Edison forum: https://communities.intel.com/thread/58544.
I basically wound up writing my own class to handle the frame grabs, directly using V4L2. That way you can provide a circular list of buffers and allow the frame grabbing and application logic to be properly decoupled. That was done in C++ though, for a C++ application. Assuming the above link delivers on its promises, that might be a far easier approach. I'm not sure whether it would work on BeagleBone, but maybe there's something similar to PiCamera out there. Good luck.
EDIT: I took a look at the source code for 2.4.11 of OpenCV. It looks like they now default to using 4 buffers, but you must be using V4L2 to take advantage of that. If you look closely at your error message HIGHGUI ERROR: V4L: Property..., you see that it references V4L, not V4L2. That means that the build of OpenCV you're using is falling back on the old V4L driver. In addition to the singular buffer causing performance issues, you're using an ancient driver that probably has many limitations and performance problems of its own.
Your best bet would be to build OpenCV yourself to make sure that it uses V4L2. If I recall correctly, the OpenCV configuration process checks whether the V4L2 drivers are installed on the machine and builds it accordingly, so you'll want to make sure that V4L2 and any related dev packages are installed on the machine you use to build OpenCV.
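One way to verify what your build actually uses (a small sketch; cv2.getBuildInformation() has been available since the 2.4 series) is to grep the build report for the Video I/O section:

import cv2

# Print the Video I/O lines of the build report; they should say
# whether V4L/V4L2 support was compiled in.
for line in cv2.getBuildInformation().splitlines():
    if "V4L" in line:
        print(line)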
Try this one! I replaced some code in the cap.set() section:
import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(3, 640)   # 3 = CV_CAP_PROP_FRAME_WIDTH
cap.set(4, 480)   # 4 = CV_CAP_PROP_FRAME_HEIGHT
cap.set(5, 20)    # 5 = CV_CAP_PROP_FPS

a = 30
t = time.time()
while a > 0:
    now = time.time()
    print now - t
    t = now
    ret, frame = cap.read()
    # Some processes
    print a, ret
    print frame.shape
    a = a - 1
    k = cv2.waitKey(20)
    if k == 27:
        break
cv2.destroyAllWindows()
Output (PC webcam); your original code didn't work for me.
>>0.0
>>30 True
>>(480, 640, 3)
>>0.246999979019
>>29 True
>>(480, 640, 3)
>>0.0249998569489
>>28 True
>>(480, 640, 3)
>>0.0280001163483
>>27 True
>>(480, 640, 3)
>>0.0320000648499