Displaying a video queue buffer in HTML - python

I want to be able to view a queue of OpenCV frames (from my webcam) in an HTML file.
I have a writer function, which writes the webcam's frames to a Python queue:
def writer():
    cap = cv2.VideoCapture(0)
    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        # If a frame was read successfully, add it to the queue
        if ret:
            q.put(frame)
        else:
            break
The main part of my code is as follows:
q = queue.Queue()
threading.Thread(target=writer).start()
I now have a queue called q, which I would like to view in an HTML file. What is the best method to do this?

I will assume you know Python and web development (HTML, CSS, JavaScript), since you need to create something on both ends.
In your HTML, use JavaScript to continuously request new frames and display them. One easy way to do this is to use jQuery's AJAX $.get() to grab each new image, and the JavaScript setInterval() method to call the loading function every x milliseconds. Check this post on reloading an image for simple implementations, many without AJAX, for example:
setInterval(function () {
    var d = new Date();
    $("#myimg").attr("src", "/myimg.jpg?" + d.getTime());
}, 2000);
The reason I suggest doing this with AJAX is that you probably want to create a web server in Python, so that your JavaScript can request new frames through a simple REST API that serves the image frames. In Python it's easy to create a web server from scratch, or to use a library like Flask if you need to grow this into something more complex afterwards. From your description, all you need is a web server with one service: downloading an image.
A really hacky and slow way of doing this without a web server is to just use imwrite() to write your frames to disk, and use waitKey(milliseconds) in your loop to wait before rewriting the same image file. That way you could use the JavaScript code from above to reload this image.
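Putting the pieces together, a minimal Flask sketch of that one-service server could look like this. It assumes (my assumption, not part of the question) that the writer thread JPEG-encodes each frame before queueing it, e.g. `ok, jpg = cv2.imencode('.jpg', frame)` followed by `q.put(jpg.tobytes())`, so the route only has to serve bytes; the `/myimg.jpg` path matches the JavaScript snippet above:

```python
import queue

from flask import Flask, Response

app = Flask(__name__)
q = queue.Queue()  # the writer thread puts JPEG-encoded frames here

@app.route('/myimg.jpg')
def latest_frame():
    # Block until the writer provides the next encoded frame,
    # then serve it as a plain JPEG for the polling JavaScript.
    jpg_bytes = q.get()
    return Response(jpg_bytes, mimetype='image/jpeg')

# app.run() would start the server; omitted so the sketch stays importable
```

With this running, the setInterval() snippet above fetches a fresh frame on every poll.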

Related

How to get text in current buffer in neovim using Python API?

I want to make a plugin for neovim using the Python API (pynvim). The problem is I want to get the current buffer's text, updated in real time. I have searched the web and didn't find any useful (or understandable) documentation for this.
You can use pynvim to subscribe to an event in neovim. Do keep in mind that pynvim is async, but my example uses a simple while loop to demonstrate how to monitor a buffer for real-time changes and get its content.
from time import sleep
from pynvim import attach, api

nvim = attach('socket', path='/tmp/nvim')
buffer = nvim.current.buffer
event = api.nvim.Nvim.from_nvim(nvim)  # use the loaded nvim session
listen = event.subscribe('TextChangedI')  # for events, refer to https://neovim.io/doc/user/autocmd.html#events

while True:
    sleep(2)
    print(listen)
    # read and print the contents of the whole buffer
    for line in range(len(buffer)):
        print(buffer[line])

How can you effectively pull an Amazon Kinesis Video Stream for custom python processing?

I have an RTSP stream streaming to Amazon Kinesis Video Streams. I want to be able to run custom image processing on each frame of the stream. I have the custom image processing algorithm written in Python, so naturally I want to be able to pull the KVS with Python and feed it to the image processing algorithm. To start, I've tried making a Python app which just displays the KVS stream, following advice from this post (however, that post uses Amazon Rekognition for custom processing; my use case, custom Python processing, is slightly different).
My questions are:
Why is my HLS Python KVS player laggy/choppy (it randomly pauses, then plays the past few seconds very quickly)? In contrast, the HLS streamer available here looks quite good.
How can you effectively pull an Amazon Kinesis Video Stream for custom python processing? What are the pros/cons of using HLS vs Amazon's GetMedia API?
I also looked into using Amazon's GetMedia API, and found this post where someone attempted to use GetMedia but then ended up going with HLS.
I found the approach to using GetMedia for custom processing explained here (python) and here (c++).
Is it worth using Sagemaker for this? For my image processing, I only want to perform inference, I don't need to train the network, it is already trained.
Code for kvs_player.py:
import boto3
import cv2

STREAM_NAME = "ExampleStream"
STREAM_ARN = "MY_STREAM_ARN"
AWS_REGION = 'us-east-1'

def hls_stream():
    kv_client = boto3.client("kinesisvideo", region_name=AWS_REGION)
    endpoint = kv_client.get_data_endpoint(
        StreamName=STREAM_NAME,
        APIName="GET_HLS_STREAMING_SESSION_URL"
    )['DataEndpoint']
    print(endpoint)

    # Grab the HLS stream URL from the endpoint
    kvam_client = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint, region_name=AWS_REGION)
    url = kvam_client.get_hls_streaming_session_url(
        StreamName=STREAM_NAME,
        PlaybackMode="LIVE"
    )['HLSStreamingSessionURL']

    vcap = cv2.VideoCapture(url)
    while True:
        # Capture frame-by-frame
        ret, frame = vcap.read()
        if frame is not None:
            # Display the resulting frame
            cv2.imshow('frame', frame)
            # Press q to close the video window before the stream ends
            if cv2.waitKey(22) & 0xFF == ord('q'):
                break
        else:
            print("Frame is None")
            break

    # When everything is done, release the capture
    vcap.release()
    cv2.destroyAllWindows()
    print("Video stop")

if __name__ == '__main__':
    hls_stream()
Some generic answers
You might want to do some debugging in your application to understand what "laggy" means. It could simply be network latency, errors during transmission, or performance issues in decoding, Python processing, and subsequent re-encoding.
GetMedia is a fast API, and the parser library https://github.com/aws/amazon-kinesis-video-streams-parser-library can be used for real-time parsing. You will likely need to decode the video stream, which is an operation two orders of magnitude more CPU- and IO-intensive, so you might want to use a hardware-accelerated decoder (or at least a good software decoder that takes advantage of the SIMD/SSE2 instruction set). JCodec is certainly not the one.
It depends on your application, really. At any rate, before the actual SageMaker run, you need to retrieve the fragments, then parse and decode them. There is a KVS SageMaker KIT sample you can use for more serious inference applications.
For more scoped and targeted questions, you can go directly to the GitHub page for the appropriate project/asset in question and cut an issue, including a detailed description and verbose logs.

Python Flask web application hanging when called multiple times

I created a web application using Flask, in order to trigger a detection routine through an HTTP request.
Basically, every time a GET request is sent to the endpoint URL, I want a function to be executed.
The code I'm using is:
web_app = Flask(__name__)

@web_app.route('/test/v1.0/run', methods=['GET'])
def main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        ret, frame = cap.read()
        # *** performing operations on each frame and running ML models on them ***
    return ('', 204)

if __name__ == '__main__':
    web_app.run(debug=True)
Everything works fine the first time, if I use:
curl -i http://localhost:5000/test/v1.0/run
the function main() is executed, and at the end the results are uploaded to an online database.
After this, the program keeps listening on the URL. If I send another GET request, main() is executed again, but it hangs after the first iteration of the while loop.
I tried simply running the same code multiple times by placing it in a for loop:
for i in range(10):
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        ret, frame = cap.read()
        # *** performing operations on each frame and running ML models on them ***
and it works without any problem, so the hanging should not depend on anything I'm doing inside the code.
The problem seems to be caused by the fact that I'm using Flask to trigger the function, but in that case I don't understand why main() hangs after starting. I'm very new to Flask and web applications in general, so I'm probably missing something very simple here.
The problem was that I was also displaying the frames collected from the camera, using
cv2.imshow("window", frame)
and, even though at the end of the program I closed everything with:
cap.release()
cv2.destroyAllWindows()
something remained hanging and got the process stuck at the next iteration.
I solved it by removing the cv2.imshow call... I don't really need to visualize the stream, so I can live with it.
Mostly out of curiosity though, I'll try to figure out how to make it work even when visualizing the video frames.

How to do real-time video streaming frame by frame in python?

What I need is to read in one frame of a video, then send this frame as a frame in a video, like a P-frame, to the server, and have the server decode it based on the information of the frames it received before this frame. (So only the first frame is sent as a complete I-frame; the rest are just P-frames, to save bandwidth.)
Then I need to do some real-time operation on this frame.
But when I try ffmpeg's and opencv's built-in streaming protocols, they can only save a complete video file. I need to do some real-time manipulation as soon as I receive one frame, so I can't wait until I have received the whole file.
So what package should I use? Or is there no existing lib/software that can do this?

Asynchronous image load in pygame

I am writing a small application in python 2.7 using pygame in which I would like to smoothly display animated tiles on screen. I am writing on Ubuntu, but target platform is Raspberry Pi if that's relevant. The challenge is that the textures for these animated tiles are stored on a web server and are to be loaded dynamically over time, not all at once. I'd like to be able to load these images into pygame with no noticeable hitch in my animation or input response. The load frequency is pretty low, like grabbing a couple jpgs every 30 seconds. I am willing to wait a long time for the image to load in the background if it means the main input/animation thread remains unhitched.
So using the multiprocessing module, I am able to download images from a server asynchronously into a buffer, and then pass this buffer to my main pygame process over a multiprocessing.queues.SimpleQueue object. However, once the buffer is accessible in the pygame process, there is still a hitch in my application while the buffer is loaded into a Surface for blitting via pygame.image.frombuffer().
Is there a way to make this pygame.image.load() call asynchronous so that my animation in the game, etc., is not blocked? I can't think of an obvious solution due to the GIL.
If I were writing a regular OpenGL program in C, I would be able to asynchronously write data to the GPU using a pixel buffer object, correct? Does pygame expose any part of this API by any chance? I can't seem to find it in the pygame docs, which I am pretty new to, so pardon me if the answer is obvious. Any help pointing out how pygame's terminology translates to the OpenGL API would be a big help, as would any relevant example in which pygame initializes a texture asynchronously!
If pygame doesn't offer this functionality, what are my options? Is there a way to do this is with PySDL2?
EDIT: OK, so I tried using pygame.image.frombuffer(), and it doesn't really cut down on the hitching I am seeing. Any ideas on how I can make this image load truly asynchronous? Here are some code snippets illustrating what I am currently doing.
Here's the async code I have that sits in a separate process from pygame:
def _worker(in_queue, out_queue):
    done = False
    while not done:
        if not in_queue.empty():
            obj = in_queue.get()
            # if a bool is passed down the queue, set the done flag
            if isinstance(obj, bool):
                done = obj
            else:
                url, w, h = obj
                # grab the jpg at the given url; it is compressed, so we use PIL to parse it
                jpg_encoded_str = urllib2.urlopen(url).read()
                # PIL.ImageFile
                parser = ImageFile.Parser()
                parser.feed(jpg_encoded_str)
                pil_image = parser.close()
                buff = pil_image.tostring()
                # place the decompressed buffer into the queue for consumption by the main thread
                out_queue.put((url, w, h, buff))

# and so I create a subprocess that runs the _worker function
Here's my update loop that runs in the main thread. It checks whether the _worker process has put anything into the out_queue and, if so, loads it into pygame:
def update():
    if not out_queue.empty():
        url, w, h, buff = out_queue.get()
        # This call is where I get a hitch in my application
        image = pygame.image.frombuffer(buff, (w, h), "RGB")
        # Place the loaded image on the pygame.Sprite, etc. in the callback
        callback = on_load_callbacks.pop(url, None)
        if callback:
            callback(image, w, h)
Perhaps you could try converting the image from a buffer object to a more efficient object in the worker function, perhaps a surface, and then send that computed surface to the out queue?
This way, generation of the surface object won't block the main thread.
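One caveat on building the surface in the worker: with a multiprocessing worker (as in the question), a pygame.Surface generally can't be sent through the queue, because it isn't picklable; with a worker thread it can. Independent of that, here is a tiny sketch of the frombuffer() step itself, using a synthetic 2x2 red image as a stand-in for the decompressed jpg buffer coming out of the queue (pygame's docs describe frombuffer() as sharing the pixel data rather than copying it):

```python
import pygame

# Synthetic stand-in for the (url, w, h, buff) tuple from the queue:
# a 2x2 image whose pixels are all red, packed as raw RGB bytes.
w, h = 2, 2
buff = b'\xff\x00\x00' * (w * h)

# frombuffer() wraps the existing bytes in a Surface without copying them
surface = pygame.image.frombuffer(buff, (w, h), "RGB")
print(surface.get_size())
```

If the hitch persists after this, the expensive part is often a later convert()/blit against the display format rather than frombuffer() itself, so it can be worth profiling those calls separately.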
