Asynchronous image load in pygame - python

I am writing a small application in python 2.7 using pygame in which I would like to smoothly display animated tiles on screen. I am writing on Ubuntu, but target platform is Raspberry Pi if that's relevant. The challenge is that the textures for these animated tiles are stored on a web server and are to be loaded dynamically over time, not all at once. I'd like to be able to load these images into pygame with no noticeable hitch in my animation or input response. The load frequency is pretty low, like grabbing a couple jpgs every 30 seconds. I am willing to wait a long time for the image to load in the background if it means the main input/animation thread remains unhitched.
So using the multiprocessing module, I am able to download images from a server asynchronously into a buffer, and then pass this buffer to my main pygame process over a multiprocessing.queues.SimpleQueue object. However, once the buffer is accessible in the pygame process, there is still a hitch in my application while the buffer is loaded into a Surface for blitting via pygame.image.frombuffer().
Is there a way to make this pygame.image.load() call asynchronous so that the animation in the game, etc. is not blocked? I can't think of an obvious solution because of the GIL.
If I were writing a regular OpenGL program in C, I would be able to write data to the GPU asynchronously using a pixel buffer object, correct? Does pygame expose any part of this API by any chance? I can't seem to find it in the pygame docs, which I am pretty new to, so pardon me if the answer is obvious. Any help translating pygame's terminology to the OpenGL API would be appreciated, and any example of pygame initializing a texture asynchronously would be amazing!
If pygame doesn't offer this functionality, what are my options? Is there a way to do this with PySDL2?
EDIT: OK, so I tried using pygame.image.frombuffer(), and it doesn't really cut down on the hitching I am seeing. Any ideas how I can make this image load truly asynchronous? Here are some code snippets illustrating what I am currently doing.
Here's the async code, which sits in a separate process from pygame:
import urllib2                      # Python 2.7, per the question
from PIL import ImageFile

def _worker(in_queue, out_queue):
    done = False
    while not done:
        if not in_queue.empty():
            obj = in_queue.get()
            # if a bool is passed down the queue, set the done flag
            if isinstance(obj, bool):
                done = obj
            else:
                url, w, h = obj
                # grab the jpg at the given url; it is compressed, so we use PIL to parse it
                jpg_encoded_str = urllib2.urlopen(url).read()
                # PIL.ImageFile
                parser = ImageFile.Parser()
                parser.feed(jpg_encoded_str)
                pil_image = parser.close()
                buff = pil_image.tostring()
                # place the decompressed buffer into the queue for consumption by the main process
                out_queue.put((url, w, h, buff))

# a subprocess is then created that runs the _worker function
Here's my update loop, which runs in the main process. It checks whether the _worker process has put anything into out_queue and, if so, loads it into pygame:
def update():
    if not out_queue.empty():
        url, w, h, buff = out_queue.get()
        # This call is where I get a hitch in my application
        image = pygame.image.frombuffer(buff, (w, h), "RGB")
        # Place the loaded image on the pygame.Sprite, etc. in the callback
        callback = on_load_callbacks.pop(url, None)
        if callback:
            callback(image, w, h)

You could try converting the image from a raw buffer to a more efficient object, such as a Surface, inside the worker, and then send that precomputed surface through the out queue.
That way, building the surface object won't block the main thread.
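A minimal sketch of that idea, with one caveat: a pygame.Surface can be handed between threads through a queue, but it cannot be pickled across a multiprocessing boundary, so this assumes the worker is a thread in the same process. build_surface below is a hypothetical stand-in for pygame.image.frombuffer(buff, (w, h), "RGB"), used so the sketch runs without pygame:

```python
import threading
import queue

def build_surface(buff, w, h):
    # Stand-in for pygame.image.frombuffer(buff, (w, h), "RGB"),
    # replaced here so the sketch runs without pygame installed.
    return ("surface", w, h, len(buff))

def worker(in_q, out_q):
    # Do the expensive construction off the main thread, then hand the
    # finished object over; the main loop only pays for a cheap get().
    while True:
        item = in_q.get()
        if item is None:              # sentinel: shut down
            break
        url, w, h, buff = item
        out_q.put((url, build_surface(buff, w, h)))

in_q, out_q = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(in_q, out_q), daemon=True)
t.start()

in_q.put(("http://example.com/tile.jpg", 2, 2, b"\x00" * 12))
url, surface = out_q.get()            # a real main loop would poll get_nowait()
in_q.put(None)
t.join()
```

Note that pygame.image.frombuffer is documented to share the passed buffer rather than copy it, so if the hitch survives this change it may actually come from a later convert() or blit rather than from the surface creation itself.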

Related

Python Tkinter: How to use update() thread safe instead of using mainloop()?

For context, why I want to use update() and update_idletasks() instead of just using mainloop():
I need to display a fullscreen FullHD 50 FPS stream via cv2.imshow, grabbed in a different thread from a Basler Dart USB camera, with a (Tkinter) GUI window kept topmost.
The conversion from the cv2 Mat to a tk PhotoImage and updating the canvas or label displaying the image takes over 30 ms under the best conditions, while grabbing the frame and performing some bitwise operations to overlay a transparent image only takes a few ms. Displaying the stream this way is too slow.
For the sake of not needing to learn another GUI framework right now, and to reuse most of the existing code, I found a way to display a cv2.imshow() fullscreen window and a tk.Tk window topmost at the same time, but I am unsure whether this is a good idea and how to implement it the right way, because of the warning in CPython's _tkinter.c about the thread lock situation.
I read some suggested solutions for displaying a tk and a cv2 window at the same time using threading, but those didn't work for me, maybe because the image grabbing within my CV2Window is already in a thread.
Just calling update() within the cv2 loop works easily, but I don't know if it is a good idea:
Would it be safe to ignore the Tcl lock and just use update(), provided I implement the communication between the two windows with a thread-safe queue and nothing within the Tkinter events blocks for too long?
My simplified code right now is:
# Standard library imports
from cv2 import waitKey
from sys import exit

# Local application imports
from CV2Window import CV2Window
from tkWindows import MenuWindow, MenuFrame

# child class of tk.Tk
# -topmost and overrideredirect True, geometry "+0+0"
class MenuApp(MenuWindow):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # frame withdraws and deiconifies buttons when the menu toggle button is clicked,
        # to create a dropdown menu. Row 0 is used by the toggle button
        self.menu_frame = MenuFrame(self)
        self.menu_frame.add_button("Example Button", command=None, icon="file-plus", row=1)
        self.menu_frame.add_button("Close", command=self.close, icon="power", row=2)
        self.menu_frame.grid()

    def close(self):
        self.destroy()

def main():
    cv2window = CV2Window()
    gui = MenuApp()
    while True:
        ## Update cv2 window
        cv2window.update()
        ## Update GUI manually
        gui.update()
        ## Check if GUI is still open, otherwise close cv2 window
        try:
            gui.winfo_exists()
        except:
            print("GUI closed, closing cv2 window..")
            cv2window.close()
            break
But CV2Window contains the camera grabbing via MyCameraHandler, a class version of the Basler pypylon example "grabusinggrabloopthread.py", which acquires the frames in a different thread.
Simplified code for CV2Window:
# Standard library imports
from cv2 import namedWindow, setWindowProperty, imshow, WINDOW_FULLSCREEN, WND_PROP_FULLSCREEN, waitKey, destroyAllWindows
from pypylon.genicam import GenericException

# Local application imports
from CameraClass import MyCameraHandler

class CV2Window():
    def __init__(self):
        try:
            self.cam_handler = MyCameraHandler()
            self.cam_handler.start_grabbing()
        except GenericException as e:
            print("Exception in CV2Window: ", e)
            try:
                self.cam_handler.stop()
            except:
                pass
            exit(1)
        self.img = self.cam_handler.get_image()
        namedWindow("cam", WND_PROP_FULLSCREEN)
        setWindowProperty("cam", WND_PROP_FULLSCREEN, WINDOW_FULLSCREEN)
        imshow("cam", self.img)

    def update(self):
        self.img = self.cam_handler.get_image()
        waitKey(1)  # pump the HighGUI event loop without blocking
        imshow("cam", self.img)

    def close(self):
        destroyAllWindows()
        self.cam_handler.stop()
Reading the CPython _tkinter.c code, this could be a problem:
The threading situation is complicated. Tcl is not thread-safe, except
when configured with --enable-threads.
So we need to use a lock around all uses of Tcl. Previously, the
Python interpreter lock was used for this. However, this causes
problems when other Python threads need to run while Tcl is blocked
waiting for events.
If mainloop() is used, Tkinter will get and release the right locks at the right time, but then it is not possible to display and update my CV2Window. From my understanding, update() is only a single tcl call, without any lock management.
As of right now, it is working, but there is zero communication between the two windows.
I need to be able to invoke methods in CV2Window from the Tkinter GUI, and probably in the future also sharing small data/information from the CV2Window to Tkinter.
The next thing I'll try is communicating with a queue, since I don't need to share the image, only some information or actions to perform, and the queue.Queue is threadsafe... this should work, I think?
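That queue-based handoff can be sketched with the standard library alone; the command names and the window-state dict below are hypothetical stand-ins for whatever the Tkinter buttons would need to trigger in CV2Window:

```python
import queue

# Hypothetical command queue: Tkinter button callbacks put commands on it,
# and the cv2 loop drains it once per frame.
commands = queue.Queue()

def on_button_click():
    # Would run inside a Tkinter callback; queue.Queue is safe even when
    # producer and consumer live in different threads.
    commands.put(("set_overlay", True))

def cv2_update(window_state):
    # Drain everything that arrived since the last frame, without blocking.
    while True:
        try:
            action, value = commands.get_nowait()
        except queue.Empty:
            break
        if action == "set_overlay":
            window_state["overlay"] = value

state = {"overlay": False}
on_button_click()
cv2_update(state)
```

Because get_nowait() never blocks, the cv2 loop's frame time is unaffected when the queue is empty; the Tkinter side only ever touches the queue, never Tcl state from another thread.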
As long as the events performed because of the update call are taking less than ~15ms, I should be fine and get my needed frametime of <20ms together with the frame grab and imshow, right?
Am I missing something? I am quite new to Python and Tkinter, and wrapping my head around the tcl stuff invoked by Tkinter isn't that easy for me, so any help would be greatly appreciated.
If you are on Unix, the winfo_id() method of any Tkinter widget will return the XID of the underlying drawing surface. You can use that handle to get some other code to draw on that surface. You are recommended to use a Frame with the background set to either None or the empty string (not quite sure which works!) so that Tkinter won't draw on it at the same time; no other widget class supports that. (I know I'd use an empty string in Tk for that effect, which gets converted into a NULL in the underlying structure and that triggers the frame painting code to do nothing on a redraw/repaint request.) I don't know how you'd get the XID into the other framework; it would be responsible for opening its own connection and setting up listeners, but that would work very well in another thread (or even another process; I've done that in the past). That solution does not work on other platforms, as I believe they use IDs that do not have meaning outside of the toolkit implementation. (Technically, there are a few options in Tk that Tkinter does not expose that would help, possibly, but I'm not sure. The -container/-use protocol is not well documented.)
More generally, a custom image type (pretty much requires writing C or C++ code) would let you do something close, though the whole image system is not really designed for animated streaming media. (It certainly isn't designed for hardware acceleration!) It's not something for a quick fix, and I've never tried doing that myself.

Displaying a video queue buffer in HTML

I want to be able to view a queue of OpenCV frames (from my webcam) in an HTML file.
I have a writer function, which writes the webcam's frames to a Python queue:
def writer():
    cap = cv2.VideoCapture(0)
    while True:
        # Capture frame-by-frame
        ret, frame = cap.read()
        # If a frame is read, add the new frame to the queue
        if ret == True:
            q.put(frame)
        else:
            break
The main part of my code is as follows:
import queue
import threading

q = queue.Queue()
threading.Thread(target=writer).start()
I now have a queue called q, which I would like to view in an HTML file. What is the best method to do this?
I will assume you know Python and Web Development (HTML, CSS, JavaScript), since you need to create something on both ends.
In your HTML, use JavaScript to continuously request the frames and show them. One easy way is to use the AJAX $.get() functionality from jQuery to grab the new image and show it, and the JavaScript setInterval() method to call the function that loads new images every x milliseconds. Check this post on reloading an image for simple implementations, many without AJAX, for example:
setInterval(function () {
    var d = new Date();
    $("#myimg").attr("src", "/myimg.jpg?" + d.getTime());
}, 2000);
The reason I suggest doing this with AJAX is that you probably want to create a web server in Python, so that your JavaScript can request new frames by consuming a simple REST API that serves the image frames. In Python it's easy to create a web server from scratch, or to use a library like Flask if you need to grow this into something more complex later. From your description, all you need is a web server with one service: downloading an image.
A really hacky and slow way of doing this without a web server is to just use imwrite() to write your frames to disk, and use waitKey(milliseconds) in your loop to wait before rewriting the same image file. That way you could use the JavaScript code from above to reload this image.
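To make the server half concrete, here is a minimal sketch using only the Python standard library; the frame bytes are faked with a placeholder, where a real implementation would store the output of cv2.imencode(".jpg", q.get()) instead:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

latest_frame = b"\xff\xd8fake-jpeg-bytes"  # stand-in for cv2.imencode output
frame_lock = threading.Lock()

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the most recent frame; the timestamp query string added by
        # the JavaScript side is ignored, it only defeats browser caching.
        if self.path.startswith("/myimg.jpg"):
            with frame_lock:
                body = latest_frame
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging in this demo

server = HTTPServer(("127.0.0.1", 0), FrameHandler)  # port 0: OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

data = urlopen("http://127.0.0.1:%d/myimg.jpg" % server.server_address[1]).read()
server.shutdown()
```

The writer thread would update latest_frame under frame_lock instead of (or in addition to) putting frames on the queue, so each GET always sees a complete, current JPEG.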

How to execute processing task and socket communication concurrently in python 3?

I am having some trouble understanding how to use the threading module in Python 3.
Origin: I wrote a Python script that does some image processing on every frame of a camera stream, in a for loop.
For this I wrote some functions which are used inside the main script. The main script/loop isn't encapsulated inside a function.
Aim: I want the main loop to run the whole time. The result of the processing of the latest frame has to be sent to a socket client only when the client sends a request to the server socket.
My idea was to use two threads: one for the image processing, and one for the server socket, which listens for a request, takes the latest image processing result and sends it to the client socket.
I have seen various tutorials on how to use threading and understand the workflow in general, but not how to apply it to this particular case, so I hope for your help.
Below there is the rough structure of the origin script:
import cv2
import numpy
import json
import socket
from threading import Thread

def crop(image, coords):
    ...

def cont(image):
    ...

# load parameters
a = json_data["..."]

# init cam
camera = PiCamera()

# main loop
for frame in camera.capture_continuous(...):
    #######
    # some image processing
    #######
    result = (x, y, z)
Thank you in advance for your ideas!
Greetings
Basically, you have to create a so-called thread pool.
To this pool you add the functions you want executed in a thread, with their specific parameters, and then start the pool.
https://www.codementor.io/lance/simple-parallelism-in-python-du107klle
Here a thread pool with .map is used. There are more advanced functions that do the job; you can read the documentation of ThreadPool or search for other tutorials.
Hope it helped
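As a concrete starting point, here is a minimal standard-library sketch of the two-thread layout described in the question: the main loop keeps overwriting a shared "latest result" while a server thread answers each client request with it. All names, the JSON wire format, and the (x, y, z) payload are illustrative assumptions:

```python
import json
import socket
import threading

latest = {"result": None}          # shared latest processing result
lock = threading.Lock()

def serve(server_sock):
    # Answer every connection with the latest result, encoded as JSON.
    while True:
        conn, _ = server_sock.accept()
        with lock:
            payload = json.dumps(latest["result"])
        conn.sendall(payload.encode())
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Stand-in for the camera loop: overwrite the shared result each "frame".
for frame_no in range(3):
    with lock:
        latest["result"] = (frame_no, frame_no * 2, frame_no * 3)

# A client asking for the latest result (JSON turns the tuple into a list).
client = socket.create_connection(server.getsockname())
data = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    data += chunk
client.close()
reply = json.loads(data.decode())
```

Because only the server thread blocks in accept(), the processing loop never waits on the network; the lock only guards the brief read/write of the shared result.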

How image capturing with Pygame actually works?

My goal is to set up a webcam with Raspberry Pi to be active during a certain time (let's say 2 minutes). During this time I should be able to capture a still image at any time and save that image to a folder. I chose to work with pygame. My project is about capturing an image as soon as a sensor has been triggered, so it needs to be very responsive.
According to the documentation for Pygame camera here it says:
open()
Opens the camera device, attempts to initialize it, and begins
recording images to a buffer. The camera must be started before any of
the below functions can be used.
get_image()
Pulls an image off of the buffer as an RGB Surface. It can optionally
reuse an existing Surface to save time. The bit-depth of the surface
is either 24 bits or the same as the optionally supplied Surface.
So in my case, get_image() simply seems to pull the first image captured after start() has been called. My question is: how can I reach the buffer with all captured images, and how does the capturing actually work? I can't find a way to capture and save a still image at an arbitrary time between start() and stop(). Since start() takes a few seconds to initialize, it is simply too slow to call start(), get_image() and stop() one after another. Any help or suggestions would be appreciated.
See my python code below:
import pygame
import pygame.camera

def activateCapturing():
    pygame.init()
    pygame.camera.init()
    cam = pygame.camera.Camera("/dev/video0", (320, 180))
    cam.start()
    pngdata = None
    imageAsbytes = []
    activated = True
    while activated:
        if readSensor():
            img = cam.get_image()
            pygame.image.save(img, "/tmp/image.png")
            activated = False
            with open("/tmp/image.png", 'rb') as f:
                pngdata = f.read()
            imageAsbytes = bytearray(pngdata)
    cam.stop()
    return imageAsbytes
Thanks in advance!
You simply do not stop the camera after capturing one image.
See https://www.pygame.org/docs/tut/CameraIntro.html.
get_image() gets the image the camera currently sees from the buffer; the buffer is NOT "all pictures since start()" but simply the currently viewed image.
Use stop() after your capture window to stop acquiring more images.
If you are after performance, you might want to scroll down that page and review the section about capturing a live stream: if you did the same (without displaying the stream) and just saved one image to disk when needed, you should get a decent framerate.
API: get_image()

ROS Image subscriber lag

I am having some lag issues with a rospy subscriber listening to image messages.
Overview:
I have a rosbag streaming images to /camera/image_raw at 5Hz. I also have an image_view node for displaying the images for reference. This image_view shows them at 5Hz.
In my rospy subscriber (initialized with queue = 1), I also display the image (for comparing lag time against the image_view node). The subscriber subsequently does some heavy processing.
Expected result:
Since queue size is 1, the subscriber should process the latest frame, and skip all other frames in the meanwhile. Once it completes processing, it should then move on to the next latest frame. There should be no queuing of old frames. This would result in a choppy, but not laggy video (low fps, but no "delay" wrt rosbag stream, if that makes sense)
Actual result:
The subscriber is lagging behind the published stream. Specifically, the image_view node displays the images at 5Hz, and the subscriber seems to queue up all the images and processes them one by one, instead of just grabbing the latest image. The delay also grows over time. When I stop the rosbag stream, the subscriber continues to process images in the queue (even though queue = 1).
Note that if I change the subscriber to have a very large buffer size, as below, the expected behavior is produced:
self.subscriber = rospy.Subscriber("/camera/image_raw", Image, self.callback, queue_size = 1, buff_size=2**24)
However, this is not a clean solution.
This issue has also been reported in the following links, where I found the buffer size solution. The official explanation hypothesizes that the publisher may actually be slowing down, but this is not the case since the image_view subscriber displays the images at 5Hz.
https://github.com/ros/ros_comm/issues/536, Ros subscriber not up to date, http://answers.ros.org/question/50112/unexpected-delay-in-rospy-subscriber/
Any help is appreciated. Thanks!
Code:
def callback(self, msg):
    print "Processing frame | Delay:%6.3f" % (rospy.Time.now() - msg.header.stamp).to_sec()
    orig_image = self.bridge.imgmsg_to_cv2(msg, "rgb8")
    if (self.is_image_show_on):
        bgr_image = cv2.cvtColor(orig_image, cv2.COLOR_RGB2BGR)
        cv2.imshow("Image window", bgr_image)
        cv2.waitKey(1)
    result = process(orig_image)  # heavy processing task
    print result
First of all, if you intend to measure delay in your topic stream, this is the wrong way to do it, since both the image_view node and cv_bridge+OpenCV use a lot of resources to show the images, which causes some lag by itself.
Also, subscribing to an image topic from two different nodes is still unstable (in my distro, Indigo) unless you change the transport hint on the publisher side.
What I suggest is to stop one of the subscriber nodes, check that the image is streaming correctly (especially in your code), and then use rostopic delay someTopic to show the delay you are getting. If the problem still persists, you may change the transport_hint on the publisher to UDP (I am not sure that is possible with rosbag, but with a real driver it is).
When working with large size messages like images or point clouds, the right way to go is to use Nodelets.
