ROS Image subscriber lag - python

I am having some lag issues with a rospy subscriber listening to image messages.
Overview:
I have a rosbag streaming images to /camera/image_raw at 5 Hz. I also have an image_view node displaying the images for reference; it shows them at 5 Hz.
In my rospy subscriber (initialized with queue_size=1), I also display the image (to compare lag against the image_view node). The subscriber then does some heavy processing.
Expected result:
Since the queue size is 1, the subscriber should process the latest frame and skip any frames that arrive in the meantime. Once processing completes, it should move on to the next latest frame. No old frames should queue up. This would result in choppy but not laggy video (low fps, but no delay with respect to the rosbag stream, if that makes sense).
Actual result:
The subscriber lags behind the published stream. Specifically, the image_view node displays the images at 5 Hz, while the subscriber appears to queue up all the images and process them one by one instead of grabbing only the latest. The delay also grows over time. When I stop the rosbag stream, the subscriber keeps processing images from the queue (even though queue_size=1).
Note that if I change the subscriber to have a very large buffer size, as below, the expected behavior is produced:
self.subscriber = rospy.Subscriber("/camera/image_raw", Image, self.callback, queue_size = 1, buff_size=2**24)
However, this is not a clean solution.
This issue has also been reported at the links below, where I found the buffer size workaround. The official explanation hypothesizes that the publisher may actually be slowing down, but that is not the case here, since the image_view subscriber displays the images at 5 Hz.
https://github.com/ros/ros_comm/issues/536, Ros subscriber not up to date, http://answers.ros.org/question/50112/unexpected-delay-in-rospy-subscriber/
Any help is appreciated. Thanks!
Code:
def callback(self, msg):
    print "Processing frame | Delay:%6.3f" % (rospy.Time.now() - msg.header.stamp).to_sec()
    orig_image = self.bridge.imgmsg_to_cv2(msg, "rgb8")
    if self.is_image_show_on:
        bgr_image = cv2.cvtColor(orig_image, cv2.COLOR_RGB2BGR)
        cv2.imshow("Image window", bgr_image)
        cv2.waitKey(1)
    result = process(orig_image)  # heavy processing task
    print result

First of all, if you intend to measure the delay in your topic stream, you are going about it the wrong way, since both the image_view node and cv_bridge+opencv use a lot of resources to show the images, which causes some lag by itself.
Also, subscribing to an image topic from 2 different nodes is still unstable (in my distro, which is Indigo) unless you change the transport hint on the publisher side.
What I suggest is to stop one of the subscriber nodes, check that the image is streaming correctly (in your code especially), and then use rostopic delay someTopic to show you the delay you are getting. If the problem still persists, you may change the transport_hint in the publisher to UDP (not sure that is possible with rosbag, but with a real driver it is quite possible).
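Another workaround that often helps in plain rospy, besides the big buff_size: keep the callback trivial so the subscriber's receive thread can keep draining the connection, and do the heavy work in your own loop. A minimal sketch of that pattern, reusing the question's topic and its hypothetical process() task:
import rospy
from sensor_msgs.msg import Image

class LatestFrameProcessor(object):
    def __init__(self):
        self.latest_msg = None
        # queue_size=1 so stale frames are dropped once the socket is drained
        self.sub = rospy.Subscriber("/camera/image_raw", Image,
                                    self.callback, queue_size=1)

    def callback(self, msg):
        # Cheap callback: just remember the newest message and return.
        self.latest_msg = msg

    def run(self):
        rate = rospy.Rate(30)
        while not rospy.is_shutdown():
            msg, self.latest_msg = self.latest_msg, None
            if msg is not None:
                process(msg)  # the question's heavy processing task
            rate.sleep()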

When working with large messages like images or point clouds, the right way to go is to use nodelets.

Related

How does image capturing with Pygame actually work?

My goal is to set up a webcam with a Raspberry Pi that is active during a certain time (let's say 2 minutes). During this time I should be able to capture a still image at any moment and save that image to a folder. I chose to work with pygame. My project is about capturing an image as soon as a sensor is triggered, so it needs to be very responsive.
According to the documentation for Pygame camera here it says:
open()
Opens the camera device, attempts to initialize it, and begins
recording images to a buffer. The camera must be started before any of
the below functions can be used.
get_image()
Pulls an image off of the buffer as an RGB Surface. It can optionally
reuse an existing Surface to save time. The bit-depth of the surface
is either 24 bits or the same as the optionally supplied Surface.
So in my case, get_image() simply seems to pull the first image captured after start() has been called. My question is: how can I reach the buffer with all captured images, and how does the capturing actually work? I can't find a way to capture and save a still image (at any arbitrary moment) between the calls to start() and stop() on the pygame camera. Since start() takes a few seconds to initialize the camera, it is simply too slow to call start(), get_image() and stop() one after another for each capture. Any help or suggestions would be appreciated.
See my python code below:
import pygame
import pygame.camera

def activateCapturing():
    pygame.init()
    pygame.camera.init()
    cam = pygame.camera.Camera("/dev/video0", (320, 180))
    cam.start()
    pngdata = None
    imageAsbytes = []
    activated = True
    while activated:
        if readSensor():
            img = cam.get_image()
            pygame.image.save(img, "/tmp/image.png")
            activated = False
            with open("/tmp/image.png", 'rb') as f:
                pngdata = f.read()
            imageAsbytes = bytearray(pngdata)
    cam.stop()
    return imageAsbytes
Thanks in advance!
You simply do not stop the camera after capturing one image.
See https://www.pygame.org/docs/tut/CameraIntro.html.
get_image() gets the image the camera currently sees from the buffer; the buffer is NOT "all pictures since start()" but simply the currently viewed image.
You call stop() at the end of your whole capturing window, to stop acquiring further images.
If you are after performance, you might want to scroll down that page and review the section about Capturing a Live Stream; if you do the same (without displaying the stream) and just save one image to disk when needed, you should get a decent framerate.
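For reference, a sketch of that approach: start the camera once, keep pulling frames into a reused surface, and only save to disk when the sensor fires. readSensor() is the question's hypothetical sensor poll; the device path and size are taken from the question.
import pygame
import pygame.camera

def capture_on_trigger(timeout_ms=120000):
    pygame.init()
    pygame.camera.init()
    cam = pygame.camera.Camera("/dev/video0", (320, 180))
    cam.start()  # slow, so do it once up front
    snapshot = pygame.Surface((320, 180))
    clock = pygame.time.Clock()
    deadline = pygame.time.get_ticks() + timeout_ms
    image_bytes = None
    while pygame.time.get_ticks() < deadline:
        snapshot = cam.get_image(snapshot)  # reuse the surface each frame
        if readSensor():  # hypothetical sensor poll from the question
            pygame.image.save(snapshot, "/tmp/image.png")
            with open("/tmp/image.png", "rb") as f:
                image_bytes = bytearray(f.read())
            break
        clock.tick(30)  # cap the polling rate
    cam.stop()  # stop only after the whole capture window
    return image_bytes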

How to implement triggers in Python script

I have a psychopy script for a psychophysical experiment in which participants see different shapes and have to react in a specific way. I would like to take physiological measures (ECG) on a different computer and insert event markers into those measures. So whenever the participant is shown a stimulus, I would like this to show up on the ECG.
Basically, I would like to add commands for parallel port i/o.
I don't even know where to start, to be honest. I have read quite a bit about this, but I'm still lost.
Here is probably the shortest fully working code to show a shape and send a trigger:
from psychopy import visual, parallel, core
win = visual.Window()
shape = visual.ShapeStim(win)
# Show it
shape.draw()
win.flip()
# Immediately send trigger
parallel.setData(10) # Trigger code, here 10
core.wait(0.020) # Number of seconds to send this trigger code (enough for the hardware to send and the receiver to recognize it)
parallel.setData(0) # Stop sending trigger.
You could, of course, extend this by putting the stimulus presentation and trigger in a loop (running trials), and do various other things alongside it, e.g. collecting responses and saving data. This is just a minimal example of stimulus presentation and sending a trigger.
It is very much on purpose that the trigger code is located immediately after flipping the window. On most hardware, sending a trigger is very fast (the voltage on the port changes within 1 ms of the line being run in the script), whereas the monitor only updates its image around every 16.7 ms, and win.flip() waits for the latter. I've made some examples of good and bad practices for timing triggers here.
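To make the trial-loop extension mentioned above concrete, here is a sketch; the number of trials, the trigger code, and the one-second inter-trial wait are assumptions to adapt to your design:
from psychopy import visual, parallel, core

win = visual.Window()
shape = visual.ShapeStim(win)
for trial in range(10):   # assumed number of trials
    shape.draw()
    win.flip()            # the stimulus appears on this refresh
    parallel.setData(10)  # mark stimulus onset on the ECG trace
    core.wait(0.020)      # hold the trigger long enough to be registered
    parallel.setData(0)   # clear the trigger
    core.wait(1.0)        # assumed inter-trial interval
win.close()
core.quit()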
For parallel port I/O, you should use pyparallel. Make continuous ECG measurements, store those, and record a timestamp and whatever metadata you want per stimulus. Pseudo code to get you started (a fleshed-out sketch follows below):
while True:
    store_ecg()
    if time_to_show():
        stimulus()
    time.sleep(0.1)  # 10 Hz
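A slightly fleshed-out sketch of that loop with pyparallel; store_ecg(), time_to_show() and stimulus() are the placeholders from the pseudo code above, and the trigger code and pulse width are assumptions:
import time
import parallel  # pyparallel

port = parallel.Parallel()  # opens the first parallel port (e.g. /dev/parport0)

def send_trigger(code, width_s=0.02):
    port.setData(code)   # raise the trigger code on the data pins
    time.sleep(width_s)  # hold it long enough for the receiver
    port.setData(0)      # clear the pins again

while True:
    store_ecg()           # placeholder: record/store an ECG sample
    if time_to_show():    # placeholder: time for the next stimulus?
        stimulus()        # placeholder: draw and flip the stimulus
        send_trigger(10)  # mark stimulus onset in the recording
    time.sleep(0.1)       # ~10 Hz loop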

OpenCV-Python: How to get latest frame from the live video stream or skip old ones

I've integrated an IP camera with OpenCV in Python to process the live stream frame by frame. I've configured the camera at 1 FPS so that I get one frame per second in the buffer, but my algorithm takes 4 seconds to process each frame, causing a backlog of unprocessed frames in the buffer that keeps growing over time and an ever-increasing delay. To sort this out, I created one more thread that calls grab() on the capture to drain the buffer; it moves the pointer towards the latest frame on each call. In the main thread, I call the retrieve() method, which gives me the last frame grabbed by the first thread. With this design, the frame stagnation problem is fixed and the growing delay is removed, but a constant delay of 12-13 seconds remains. I suspect that when retrieve() is called it is not getting the latest frame, but the 4th or 5th frame behind it. Is there any API in OpenCV, or any other design pattern, to fix this so that I can always get the latest frame to process?
If you don't mind compromising on speed, you can create a Python generator which opens the camera and yields frames:
import cv2

def ReadCamera(Camera):
    while True:
        # Re-open the capture each time so the next read is a fresh frame.
        cap = cv2.VideoCapture(Camera)
        (grabbed, frame) = cap.read()
        cap.release()
        if grabbed:
            yield frame
Now, when you want to process the frames:
for frame in ReadCamera(Camera):
    .....
This works perfectly fine, except that repeatedly opening and closing the camera adds up in time.
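For completeness, a sketch of the background-grab pattern the question itself describes, kept to a single reader thread that simply overwrites the latest frame; the URL and the heavy process() are placeholders:
import threading
import cv2

class LatestFrame(object):
    def __init__(self, src):
        self.cap = cv2.VideoCapture(src)
        self.lock = threading.Lock()
        self.frame = None
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    def _reader(self):
        while True:
            ok, frame = self.cap.read()  # read = grab + retrieve
            if not ok:
                break
            with self.lock:
                self.frame = frame  # overwrite instead of queueing

    def read(self):
        with self.lock:
            return self.frame

stream = LatestFrame("rtsp://camera-url")  # placeholder URL
while True:
    frame = stream.read()
    if frame is not None:
        process(frame)  # the slow per-frame algorithm from the question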

Asynchronous image load in pygame

I am writing a small application in python 2.7 using pygame in which I would like to smoothly display animated tiles on screen. I am writing on Ubuntu, but the target platform is Raspberry Pi, if that's relevant. The challenge is that the textures for these animated tiles are stored on a web server and are loaded dynamically over time, not all at once. I'd like to load these images into pygame with no noticeable hitch in my animation or input response. The load frequency is pretty low, like grabbing a couple of jpgs every 30 seconds. I am willing to wait a long time for an image to load in the background if it means the main input/animation thread stays unhitched.
So using the multiprocessing module, I am able to download images from a server asynchronously into a buffer, and then pass this buffer to my main pygame process over a multiprocessing.queues.SimpleQueue object. However, once the buffer is accessible in the pygame process, there is still a hitch in my application while the buffer is loaded into a Surface for blitting via pygame.image.frombuffer().
Is there a way to make this pygame.image.frombuffer() call asynchronous so that my animation in the game, etc. is not blocked? I can't think of an obvious solution due to the GIL.
If I were writing a regular OpenGL program in C, I would be able to write data to the GPU asynchronously using a pixel buffer object, correct? Does pygame expose any part of this API by any chance? I can't seem to find it in the pygame docs, which I am pretty new to, so pardon me if the answer is obvious. Any help pointing out how pygame's terminology translates to the OpenGL API would be a big help, as would any relevant example in which pygame initializes a texture asynchronously!
If pygame doesn't offer this functionality, what are my options? Is there a way to do this is with PySDL2?
EDIT: Ok, so I tried using pygame.image.frombuffer, and it doesn't really cut down on the hitching I am seeing. Any ideas of how I can make this image load truly asynchronous? Here is some code snippets illustrating what I am currently doing.
Here's the async code I have that sits in a separate process from pygame
import urllib2
from PIL import ImageFile

def _worker(in_queue, out_queue):
    done = False
    while not done:
        if not in_queue.empty():
            obj = in_queue.get()
            # if a bool is passed down the queue, set the done flag
            if isinstance(obj, bool):
                done = obj
            else:
                url, w, h = obj
                # grab the jpg at the given url; it is compressed, so we use PIL to parse it
                jpg_encoded_str = urllib2.urlopen(url).read()
                # PIL.ImageFile
                parser = ImageFile.Parser()
                parser.feed(jpg_encoded_str)
                pil_image = parser.close()
                buff = pil_image.tostring()
                # place the decompressed buffer into the queue for consumption by the main thread
                out_queue.put((url, w, h, buff))

# and so I create a subprocess that runs the _worker function
Here's my update loop that runs in the main thread. It checks whether the _worker process has put anything into out_queue, and if so loads it into pygame:
def update():
    if not out_queue.empty():
        url, w, h, buff = out_queue.get()
        # This call is where I get a hitch in my application
        image = pygame.image.frombuffer(buff, (w, h), "RGB")
        # Place loaded image on the pygame.Sprite, etc in the callback
        callback = on_load_callbacks.pop(url, None)
        if callback:
            callback(image, w, h)
Perhaps you could try converting the image from a buffer object to a more efficient object in the worker function, perhaps a Surface, and then send that precomputed Surface to the out queue?
This way, generating the Surface object won't block the main thread.

Problem with Python and IP camera

I'm having problems getting a video stream from an IP camera I have. I'm using OpenCV to get the images from it. Here's the code I have:
import sys
import cv

video = "http://prot-on.dyndns.org:8080/video2.mjpeg"
capture = cv.CaptureFromFile(video)
cv.NamedWindow('Video Stream', 1)
while True:
    # capture the current frame
    frame = cv.QueryFrame(capture)
    if frame is None:
        break
    else:
        #detect(frame)
        cv.ShowImage('Video Stream', frame)
        k = cv.WaitKey(10)  # poll the keyboard; also lets HighGUI draw the frame
        if k == 0x1b:  # ESC
            print 'ESC pressed. Exiting ...'
            break
Actually, this thing works, but it takes too much time to display the images. I'm guessing it's because of this error from ffmpeg:
[mjpeg # 0x8cd0940]max_analyze_duration reached
[mjpeg # 0x8cd0940]Estimating duration from bitrate, this may be inaccurate
I'm not a python expert so any help would be appreciated!
First, mjpeg is a relatively simple video format. If you read your IP camera's manual, it's very likely that you can find out how to display the video in a browser with a bit of JavaScript code. In fact, if you open http://prot-on.dyndns.org:8080/video2.mjpeg in Google Chrome, you can see the video without any problem. (Maybe you shouldn't publish the real URL of your camera.)
Second, as far as I can see, the frame rate of your camera is pretty slow. That might be due to Internet latency or your camera's settings. Compare what you see in Chrome with the video displayed by your code; if they are of the same quality, then it's not your code's problem.
