Confirming an Image is Published to ROS - python

I've been trying to get an image published to ROS (using Python/rospy), and while I think I have the method right, I'm having a hard time confirming it. Using
rosrun image_view image_view image:=(topic)
doesn't seem to show anything. I've also tried rqt_bag, but I don't really understand how that tool works, other than that it doesn't show anything being published either.
A few notes before pasting my current code:
The code I use right now is based on code I have gotten to work previously. I've used a similar setup to publish text messages to ROS, and those are output fairly reliably.
This is slightly modified here. Most of this code is part of an on_message function, since this all runs through MQTT when implemented. (The logic is: acquire an image on one system -> encode it -> transfer it to the other system -> decode -> publish to ROS.)
I'm using Python 2.7 on Ubuntu, and ROS Indigo.
Without further ado, my current code for publishing:
rospy.init_node('BringInAnImage', log_level=rospy.INFO)

def convert(messagepayload):
    t = open('newpic.bmp', 'wb')  # binary mode, since this is image data
    t.write(messagepayload)
    t.close()

def on_message(client, userdata, msg):
    img = base64.b64decode(msg.payload)
    convert(img)
    time.sleep(5)
    source = cv2.imread('newpic.bmp')  # this should be a cv2 Mat (numpy array)
    # talk to ROS
    bridge = CvBridge()
    pub2 = rospy.Publisher('/BringInAnImage', Image, queue_size=10)
    pub2.publish(bridge.cv2_to_imgmsg(source, "bgr8"))
    print "uh..... done??"
I'm using a basic listening function to try to see what is going on (this is within a different script I execute in a separate terminal):
def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber("/BringInAnImage", Image, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()
The callback just prints out that the image was received.

How to check if something is published on topic xyz
To check whether a message is really being published, you can use the rostopic command-line tool.
Run the following in a terminal to print everything that is published on the specified topic; this is the easiest way to check whether something is being published.
rostopic echo <topic_name>
See the ROS wiki for more useful things rostopic can do.
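For the topic in the question, that would be:
rostopic echo /BringInAnImage
If you are unsure of the exact topic name, rostopic list prints all topics that are currently advertised.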
Why is the image not received by image_view?
While you are doing it basically right, your images will not be received by any subscriber because of a not-so-obvious but fatal problem in your code: you are using the publisher (pub2) immediately after initializing it. Subscribers need some time to register with the new publisher and will not be ready before you publish the image (see also this answer).
➔ Do not initialize a publisher just before you need it; do it right at the beginning, when initializing the node.
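A minimal sketch of that restructuring, based on the code in the question (the MQTT decode/write steps are elided):
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# Create the node and the publisher once, at startup, so subscribers
# have time to register before the first image is published.
rospy.init_node('BringInAnImage', log_level=rospy.INFO)
pub2 = rospy.Publisher('/BringInAnImage', Image, queue_size=10)
bridge = CvBridge()

def on_message(client, userdata, msg):
    # ... decode the payload and write newpic.bmp as in the question ...
    source = cv2.imread('newpic.bmp')
    pub2.publish(bridge.cv2_to_imgmsg(source, "bgr8"))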

Related

How to Automate terminal commands of ROS using a python script

I am currently trying to write an executable Python program that runs the following ROS command: rostopic echo dvrk/PSM1/position_cartesian_current. However, despite reading the ROS tutorials, I am unsure how to go about doing this. Within a file called arm.py, the following subscriber and definition already exist:
rospy.Subscriber(self.__full_ros_namespace + '/position_cartesian_current',
                 PoseStamped, self.__position_cartesian_current_cb)

def __position_cartesian_current_cb(self, data):
    self.__position_cartesian_current = posemath.fromMsg(data.pose)
Am I supposed to reuse this subscriber and definition in the new automated Python script? After obtaining the current Cartesian position, the robot will subsequently be moved to a different position. This can currently be done with ROS commands in the terminal; however, the aim is to write a Python script that automates these commands. Any help would be greatly appreciated!
import rospy
from tf import transformations
from tf_conversions import posemath
from std_msgs.msg import String, Bool, Float32, Empty, Float64MultiArray
from geometry_msgs.msg import Pose, PoseStamped, Vector3, Quaternion, Wrench, WrenchStamped, TwistStamped

def callback(data):
    rospy.loginfo(rospy.get_caller_id() + data.data)

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber(self.__full_ros_namespace + '/position_cartesian_current', PoseStamped, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()
What you want to do is write a Python ROS Node to subscribe to the topic and implement your logic.
You can do so by following this guide.
The main idea is to subscribe to the position topic, get the relevant data in the callback function and publish, in that same callback, the commands you usually perform by command line.
To reproduce a simple rostopic echo, you can just print the values you receive in the callback.
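A minimal sketch of that idea (the outgoing topic and message type here are hypothetical placeholders; check what your arm actually listens on):
import rospy
from geometry_msgs.msg import Pose, PoseStamped

# Hypothetical command topic; replace with the real one for your setup.
pub = rospy.Publisher('dvrk/PSM1/set_position_cartesian', Pose, queue_size=10)

def callback(data):
    rospy.loginfo("Current pose: %s", data.pose)  # the rostopic echo part
    new_pose = data.pose       # compute the target pose here
    pub.publish(new_pose)      # the command you used to run from the terminal

def listener():
    rospy.init_node('psm_mover', anonymous=True)
    rospy.Subscriber('dvrk/PSM1/position_cartesian_current', PoseStamped, callback)
    rospy.spin()

if __name__ == '__main__':
    listener()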
This is not a shell automation script; it is a Python ROS program which can be executed as a ROS node (using rosrun). What you accomplish by doing this is the same as working with the messages manually through ROS's built-in terminal commands. This program will execute the callback whenever a message is published on the /position_cartesian_current topic.
If you do not want to execute a bunch of commands in different terminals whenever you want to try something, ROS offers the ability to create launch files (roslaunch), which reduces your execution pipeline to a single roslaunch command. This is heavily used in the ROS community to automate processes.
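As a sketch, a launch file for the listener node below could be as small as this (the package and script names are hypothetical):
<launch>
  <node pkg="my_dvrk_scripts" type="listener.py" name="listener" output="screen" />
</launch>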
According to this doc, a PoseStamped message is composed of a Header and a Pose, so to access the Pose values, try this:
import rospy
from geometry_msgs.msg import PoseStamped

def callback(data):
    print(data.pose)

def listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber('dvrk/PSM1/position_cartesian_current',
                     PoseStamped, callback)
    rospy.spin()

Message type websocket gdax (coinbase)

I am interested in getting real-time data using the Gdax (Coinbase) WebSocket. I'm a total noob, so I am inspecting the example Gdax posted in their documentation:
import gdax, time

class myWebsocketClient(gdax.WebsocketClient):
    def on_open(self):
        self.url = "wss://ws-feed.gdax.com/"
        self.products = ["LTC-USD"]
        self.message_count = 0
        print("Lets count the messages!")

    def on_message(self, msg):
        self.message_count += 1
        if 'price' in msg and 'type' in msg:
            print("Message type:", msg["type"],
                  "\t# {}.3f".format(float(msg["price"])))

    def on_close(self):
        print("-- Goodbye! --")

wsClient = myWebsocketClient()
wsClient.start()
print(wsClient.url, wsClient.products)
while (wsClient.message_count < 500):
    print("\nmessage_count =", "{} \n".format(wsClient.message_count))
    time.sleep(1)
wsClient.close()
The output is:
...
Message type: received # 50.78.3f
Message type: open # 50.78.3f
Message type: done # 51.56.3f
Message type: received # 51.59.3f
Message type: open # 51.59.3f
Message type: done # 51.51.3f
Message type: done # 51.17.3f
Message type: done # 51.66.3f
Kernel died, restarting
I have a few question regarding this code and output:
What does the message type (received, open, done, match) mean, which type is used for doing calculations, and why are some types skipped?
Why does running the code always end in 'Kernel died, restarting'?
The documentation states that this code is for illustration only. Does that mean that this isn't a proper way of getting real-time data in order to do stuff with it?
If you know some good articles or books that can teach a noob how to work with WebSockets, I would love to hear about them!
1) See the full documentation for each message type here.
2) Everything I find related to that issue stems from environment setup, whether that is library dependencies not being properly installed or other environmental factors.
3) It's a proper way to set up a connection to the WebSocket, but they don't provide any error handling or other logic. The disclaimer is usually there to cover themselves legally and to reduce expectations for the code they provide (i.e., when someone receives errors similar to this, they aren't liable to fix, update, or help).
The Python interpreter (3.6.2, 64-bit, Windows) crashed on close() for me too.
Here is a fix (from https://github.com/danpaquin/gdax-python/issues/152):
client.stop = True
client.thread.join()
client.ws.close()
I just added these in the on_close method and there have been no more crashes (so far).
PS: the linked issue says it should be fixed in the latest gdax version, but the latest pip gdax (1.0.6) still crashed for me.
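For example, the overridden on_close could look like this (a sketch; the stop/thread/ws attributes come from the linked issue):
class myWebsocketClient(gdax.WebsocketClient):
    def on_close(self):
        # Stop the read loop and join the worker thread before closing
        # the underlying websocket, to avoid the interpreter crash.
        self.stop = True
        self.thread.join()
        self.ws.close()
        print("-- Goodbye! --")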
There are rather a few pitfalls to be aware of in building a full real-time order book (level 3) that I cannot document here, but you may be interested to learn that GDAX now offers a level 2 (almost real-time) channel which sends you the updated prices for the order book. It is probably much simpler to implement.
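For reference, subscribing to the level 2 channel is just a JSON message over the same feed. A sketch using the plain websocket-client package (the channel and field names follow the public GDAX docs, so verify them against the current documentation):
import json
import websocket  # pip install websocket-client

ws = websocket.create_connection("wss://ws-feed.gdax.com")
ws.send(json.dumps({
    "type": "subscribe",
    "product_ids": ["LTC-USD"],
    "channels": ["level2"],
}))
# The first message is a full snapshot; later ones are l2update diffs.
for _ in range(10):
    print(json.loads(ws.recv()))
ws.close()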

Communication between two separate Python engines

The problem statement is as follows:
I am working with Abaqus, a program for analyzing mechanical problems. It is basically a standalone Python interpreter with its own objects etc. Within this program, I run a python script to set up my analysis (so this script can be modified). It also contains a method which has to be executed when an external signal is received. These signals come from the main script that I am running in my own Python engine.
For now, I have the following workflow:
The main script sets a boolean to True when the Abaqus script has to execute a specific function, and pickles this boolean into a file. The Abaqus script regularly checks this file to see whether the boolean has been set to true. If so, it does an analysis and pickles the output, so that the main script can read this output and act on it.
I am looking for a more efficient way to signal the other process to start the analysis, since there is a lot of unnecessary checking going on right now. Data exchange via pickle is not an issue for me, but a more efficient solution is certainly welcome.
Search results always give me solutions based on subprocess or the like, which is for two processes started within the same interpreter. I have also looked at ZeroMQ, since it is supposed to handle things like this, but I think it is overkill and would prefer a solution in plain Python. Both interpreters are running Python 2.7 (although different versions).
Edit:
Like @MattP, I'll add this statement of my understanding:
Background
I believe that you are running a product called abaqus. The abaqus product includes a linked-in python interpreter that you can access somehow (possibly by running abaqus python foo.py on the command line).
You also have a separate python installation, on the same machine. You are developing code, possibly including numpy/scipy, to run on that python installation.
These two installations are different: they have different binary interpreters, different libraries, different install paths, etc. But they live on the same physical host.
Your objective is to enable the "plain python" programs, written by you, to communicate with one or more scripts running in the "Abaqus python" environment, so that those scripts can perform work inside the Abaqus system, and return results.
Solution
Here is a socket based solution. There are two parts, abqlistener.py and abqclient.py. This approach has the advantage that it uses a well-defined mechanism for "waiting for work." No polling of files, etc. And it is a "hard" API. You can connect to a listener process from a process on the same machine, running the same version of python, or from a different machine, or from a different version of python, or from ruby or C or perl or even COBOL. It allows you to put a real "air gap" into your system, so you can develop the two parts with minimal coupling.
The server part is abqlistener. The intent is that you would copy some of this code into your Abaqus script. The abq process would then become a server, listening for connections on a specific port number, and doing work in response. Sending back a reply, or not. Et cetera.
I am not sure if you need to do setup work for each job. If so, that would have to be part of the connection. This would just start ABQ, listen on a port (forever), and deal with requests. Any job-specific setup would have to be part of the work process. (Maybe send in a parameter string, or the name of a config file, or whatever.)
The client part is abqclient. This could be moved into a module, or just copy/pasted into your existing non-ABQ program code. Basically, you open a connection to the right host:port combination, and you're talking to the server. Send in some data, get some data back, etc.
This stuff is mostly scraped from example code on-line. So it should look real familiar if you start digging into anything.
Here's abqlistener.py:
# The below usage example is completely bogus. I don't have abaqus, so
# I'm just running python2.7 abqlistener.py [options]

usage = """
abacus python abqlistener.py [--host 127.0.0.1 | --host mypc.example.com ] \\
                             [ --port 2525 ]

Sets up a socket listener on the host interface specified (default: all
interfaces), on the given port number (default: 2525). When a connection
is made to the socket, begins processing data.
"""

import argparse

parser = argparse.ArgumentParser(description='Abacus listener',
                                 add_help=True,
                                 usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
                    help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
                    help='port number of listener (default: 2525)')
args = parser.parse_args()

import SocketServer
import json

class AbqRequestHandler(SocketServer.BaseRequestHandler):
    """Request handler for our socket server.

    This class is instantiated whenever a new connection is made, and
    must override `handle(self)` in order to handle communicating with
    the client.
    """

    def do_work(self, data):
        "Do some work here. Call abaqus, whatever."
        print "DO_WORK: Doing work with data!"
        print data
        return {'desc': 'low-precision natural constants', 'pi': 3, 'e': 3}

    def handle(self):
        # Allow the client to send a 1kb message (file path?)
        self.data = self.request.recv(1024).strip()
        print "SERVER: {} wrote:".format(self.client_address[0])
        print self.data
        result = self.do_work(self.data)
        self.response = json.dumps(result)
        print "SERVER: response to {}:".format(self.client_address[0])
        print self.response
        self.request.sendall(self.response)

if __name__ == '__main__':
    print args
    server = SocketServer.TCPServer((args.host, args.port), AbqRequestHandler)
    print "Server starting. Press Ctrl+C to interrupt..."
    server.serve_forever()
And here's abqclient.py:
usage = """
python2.7 abqclient.py [--host HOST] [--port PORT]
Connect to abqlistener on HOST:PORT, send a message, wait for reply.
"""
import argparse
parser = argparse.ArgumentParser(description='Abacus listener',
add_help=True,
usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
help='port number of listener (default: 2525)')
args = parser.parse_args()
import json
import socket
message = "I get all the best code from stackoverflow!"
print "CLIENT: Creating socket..."
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print "CLIENT: Connecting to {}:{}.".format(args.host, args.port)
s.connect((args.host, args.port))
print "CLIENT: Sending message:", message
s.send(message)
print "CLIENT: Waiting for reply..."
data = s.recv(1024)
print "CLIENT: Got response:"
print json.loads(data)
print "CLIENT: Closing socket..."
s.close()
And here's what they print when I run them together:
$ python2.7 abqlistener.py --port 3434 &
[2] 44088
$ Namespace(host='', port=3434)
Server starting. Press Ctrl+C to interrupt...
$ python2.7 abqclient.py --port 3434
CLIENT: Creating socket...
CLIENT: Connecting to :3434.
CLIENT: Sending message: I get all the best code from stackoverflow!
CLIENT: Waiting for reply...
SERVER: 127.0.0.1 wrote:
I get all the best code from stackoverflow!
DO_WORK: Doing work with data!
I get all the best code from stackoverflow!
SERVER: response to 127.0.0.1:
{"pi": 3, "e": 3, "desc": "low-precision natural constants"}
CLIENT: Got response:
{u'pi': 3, u'e': 3, u'desc': u'low-precision natural constants'}
CLIENT: Closing socket...
References:
argparse, SocketServer, json, socket are all "standard" Python libraries.
To be clear, my understanding is that you are running Abaqus/CAE via a Python script as an independent process (let's call it abq.py), which checks for, opens, and reads a trigger file to determine if it should run an analysis. The trigger file is created by a second Python process (let's call it main.py). Finally, main.py waits to read the output file created by abq.py. You want a more efficient way to signal abq.py to run an analysis, and you're open to different techniques to exchange data.
As you mentioned, subprocess or multiprocessing might be an option. However, I think a simpler solution is to combine your two scripts, and optionally use a callback function to monitor the solution and process your output. I'll assume there is no need to have abq.py constantly running as a separate process, and that all analyses can be started from main.py whenever it is appropriate.
Let main.py have access to the Abaqus Mdb. If it's already built, you open it with:
mdb = openMdb(FileName)
A trigger file is not needed if main.py starts all analyses. For example:
if SomeCondition:
    j = mdb.Job(name=MyJobName, model=MyModelName)
    j.submit()
    j.waitForCompletion()
Once complete, main.py can read the output file and continue. This is straightforward if the data file was generated by the analysis itself (e.g. .dat or .odb files). On the other hand, if the output file is generated by some code in your current abq.py, then you can probably just include that code in main.py instead.
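For instance, if the results you need live in the output database, main.py can read them directly once waitForCompletion returns (a sketch; the step and field names are hypothetical):
from odbAccess import openOdb

odb = openOdb(path=MyJobName + '.odb')
lastFrame = odb.steps['Step-1'].frames[-1]   # hypothetical step name
disp = lastFrame.fieldOutputs['U']           # displacement field
for v in disp.values[:5]:
    print v.nodeLabel, v.data
odb.close()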
If that doesn't provide enough control, instead of the waitForCompletion method you can add a callback function to the monitorManager object (which is automatically created when you import the abaqus module: from abaqus import *). This allows you to monitor and respond to various messages from the solver, such as COMPLETED, ITERATION, etc. The callback function is defined like:
def onMessage(jobName, messageType, data, userData):
    if messageType == COMPLETED:
        pass  # do stuff
    else:
        pass  # other stuff
This is then added to the monitorManager before the job is submitted:
monitorManager.addMessageCallback(jobName=MyJobName,
                                  messageType=ANY_MESSAGE_TYPE,
                                  callback=onMessage, userData=MyDataObj)
j = mdb.Job(name=MyJobName, model=MyModelName)
j.submit()
One of the benefits of this approach is that you can pass a Python object in as the userData argument. This could potentially be your output file, or some other data container. You could probably figure out how to process the output data within the callback function - for example, access the Odb and get the data, then do any manipulation as needed, without needing the external file at all.
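A sketch of that userData idea (the list used as a container is illustrative; COMPLETED, ANY_MESSAGE_TYPE, and monitorManager come from the abaqus module as above):
results = []  # any mutable Python object can ride along as userData

def onMessage(jobName, messageType, data, userData):
    if messageType == COMPLETED:
        userData.append(jobName)  # e.g. record completion, or open the Odb here

monitorManager.addMessageCallback(jobName=MyJobName,
                                  messageType=ANY_MESSAGE_TYPE,
                                  callback=onMessage, userData=results)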
I agree with the answer, except for some minor syntax problems.
Defining instance variables inside the handler is a no-no, not to mention they are not being defined in any sort of __init__() method. Subclass TCPServer and define your instance variables in TCPServer.__init__(). Everything else will work the same.
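A sketch of what that suggestion looks like (the shared counter is illustrative):
import SocketServer

class AbqServer(SocketServer.TCPServer):
    """TCPServer subclass that owns any state shared across requests."""
    def __init__(self, server_address, handler_class):
        SocketServer.TCPServer.__init__(self, server_address, handler_class)
        self.jobs_done = 0  # shared state lives on the server, not the handler

class AbqRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024).strip()
        self.server.jobs_done += 1  # handlers reach shared state via self.server
        self.request.sendall(data)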

Gstreamer message to signal new frame from video source (webcam)

I am trying to save a stream from a webcam as a series of images using GStreamer. I have written this code so far...
#!/usr/bin/python
import sys, os
import pygtk, gtk, gobject
import pygst
pygst.require("0.10")
import gst

def __init__(self):
    #....
    # Code to create a gtk Window
    #....
    self.player = gst.Pipeline("player")
    source = gst.element_factory_make("v4l2src", "video-source")
    sink = gst.element_factory_make("xvimagesink", "video-output")
    caps = gst.Caps("video/x-raw-yuv, width=640, height=480")
    filter = gst.element_factory_make("capsfilter", "filter")
    filter.set_property("caps", caps)
    self.player.add(source, filter, sink)
    gst.element_link_many(source, filter, sink)
After this, I set up a signal watch on the bus to listen for any message from the source or the sink indicating that a new frame has been sent or received, so that it can be saved.
bus = self.player.get_bus()
bus.add_signal_watch()
bus.connect("message::any", self.save_file,"Save file")
where save_file is my callback, where I want to save the file.
def save_file(self, bus, msg):
    print "SAVED A NEW FILE"
I have two questions:
How do I invoke this callback? The message::any signal is not working.
When the callback is invoked, how do I get access to the image buffer?
UPDATE (4-12-2012):
A couple of links for reference:
A Python interface for v4l. It has not been working for me, though; it seems to crash when I try to grab a frame on Ubuntu 12.04.
http://code.google.com/p/python-video4linux2/
A webcam viewer code for those interested. But this is not what I want since it uses gst-launch and does not provide the level of pipeline control I want to have. http://pygstdocs.berlios.de/pygst-tutorial/webcam-viewer.html
The GStreamer bus is not intended for this purpose. Messages put there signal special events like end-of-stream, element state changes, and so on. Buffers (images) flowing through elements usually don't generate any messages on the bus.
You may consider several possibilities:
make "tee" element before videosink and connect "multifilesink" in parallel to videosink (you may want to see some image encoders like pngenc or jpegenc and put one of them before multifilesink")
like before, but use "appsink" that allow you to handle buffers and do whatever-you-want with them
if you want to switch dumping on and off, consider using "valve" element
You may want to set "sync" property to false on your additional sink (Which cause buffers to be dumped as soon as possible without syncing to clock). Consider also adding some queues after tee (without this deadlock may occur during ready->paused transition).
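A sketch of the tee approach in the same pygst 0.10 style as the question (the element choices and file name pattern are illustrative):
import pygst
pygst.require("0.10")
import gst

player = gst.Pipeline("player")
source = gst.element_factory_make("v4l2src", "video-source")
tee = gst.element_factory_make("tee", "splitter")
q1 = gst.element_factory_make("queue", "queue-display")
videosink = gst.element_factory_make("xvimagesink", "video-output")
q2 = gst.element_factory_make("queue", "queue-dump")
encoder = gst.element_factory_make("jpegenc", "encoder")
filesink = gst.element_factory_make("multifilesink", "filesink")
filesink.set_property("location", "frame-%05d.jpg")
filesink.set_property("sync", False)  # dump frames without clock syncing

player.add(source, tee, q1, videosink, q2, encoder, filesink)
gst.element_link_many(source, tee)
gst.element_link_many(tee, q1, videosink)          # display branch
gst.element_link_many(tee, q2, encoder, filesink)  # file-dump branch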
I am not sure if my response after all these years will still be useful to you, but I hope it will be useful for others.
To get notified that a buffer has been received, you can use GStreamer pad probes. It could be something like this:
def make_pipeline(self):
    CLI2 = [
        'v4l2src ! video/x-raw,format=RGB,width=640,height=480,framerate=30/1 ! ',
        'videoconvert ! x264enc bitrate=128 ! mpegtsmux name="mux" ! hlssink name="sink"',
    ]
    gcmd = ''.join(CLI2)
    self.pipeline = Gst.parse_launch(gcmd)
    self.hlssink = self.pipeline.get_by_name("sink")
    self.hlssink.set_property("target-duration", 2)
    self.hlssink_pad = self.hlssink.get_static_pad("sink")
    probe_id = self.hlssink_pad.add_probe(Gst.PadProbeType.EVENT_UPSTREAM, probe_callback)
and then, the probe callback function could be:
def probe_callback(hlssink_pad, info):
    info_event = info.get_event()
    info_structure = info_event.get_structure()
    # do something with this info
    return Gst.PadProbeReturn.PASS
So, every time there is a matching event on the pad (you can attach probes to either source or sink pads), the probe callback function will be called from the streaming thread.
Hope this helps!

Extra characters showing up after peeking at an MSMQ message

I am in the process of upgrading an older legacy system that is using Biztalk, MSMQs, Java, and python.
Currently, I am trying to upgrade a particular piece of the project which when complete will allow me to begin an in-place replacement of many of the legacy systems.
What I have done so far is recreate the legacy system in a newer version of Biztalk (2010) and on a machine that isn't on its last legs.
Anyway, the problem I am having is that there is a piece of Python code that picks up a message from an MSMQ and places it on another server. This code has been in place on our legacy system since 2004, has worked since then, and, as far as I know, has never been changed.
Now that I have rebuilt this, I started getting errors on the remote server and, after checking a few things and eliminating many possible problems, I have established that the error occurs somewhere around the point where the Python code picks the message up from the MSMQ.
The error can be reproduced with just two messages. Please note that I am using sample XMLs here, as the actual ones are pretty long.
Message one:
<xml>
<field1>Text 1</field1>
<field2>Text 2</field2>
</xml>
Message two:
<xml>
<field1>Text 1</field1>
</xml>
Now if I submit message one followed by message two to the MSMQ, they both appear correctly on the queue. If I then call the Python script, message one is returned correctly, but message two gains extra characters.
Post-Python message two:
<xml>
<field1>Text 1</field1>
</xml>1>Te
I thought at first that there might be scoping problems within the Python code, but I have checked that as well as I can and found none. However, I must admit that this project is the first time I've looked seriously at Python code.
The Python code first peeks at a message and then receives it. I have been able to see the message when the script peeks, and it shows the same error as when it receives.
Also, this error only shows up when going from a longer message to a shorter message.
I would welcome any suggestions of things that might be wrong, or things I could do to identify the problem.
I have googled and searched and gone a little crazy. This is holding an entire project up, as we can't begin replacing the older systems with this piece in place to act as a new bridge.
Thanks for taking the time to read through my problem.
Edit: Here's the relevant Python code:
import sys
import pythoncom
from win32com.client import gencache
msmq = gencache.EnsureModule('{D7D6E071-DCCD-11D0-AA4B-0060970DEBAE}', 0, 1, 0)

def Peek(queue):
    qi = msmq.MSMQQueueInfo()
    qi.PathName = queue
    myq = qi.Open(msmq.constants.MQ_PEEK_ACCESS, 0)
    if myq.IsOpen:
        # Don't lose this pythoncom.Empty thing (it took a while)
        tmp = myq.Peek(pythoncom.Empty, pythoncom.Empty, 1)
        myq.Close()
        return tmp
This function is called from another piece of code. I don't have access to the calling code until Monday, but the call is basically:
msg = MSMQ.Peek(queue)
2nd edit: I am attaching the first half of the script. It basically loops forever:
import base64, xmlrpclib, time
import MSMQ, Config, Logger
import XmlRpcExt, os, whrandom

QueueDetails = Config.InQueueDetails
sleeptime = Config.SleepTime
XMLRPCServer = Config.XMLRPCServer
usingBase64 = Config.base64ing
version = Config.version
verbose = Config.verbose
LogO = Logger.Logger()

def MSMQToIAMS():
    # moved svr cons out of daemon loop
    LogO.LogP(version)
    svr = xmlrpclib.Server(XMLRPCServer, XmlRpcExt.getXmlRpcTransport())
    while 1:
        GotOne = 0
        for qd in QueueDetails:
            queue, agency, messagetype = qd
            #LogO.LogD('['+version+"] Searching queue %s for messages"%queue)
            try:
                msg = MSMQ.Peek(queue)
            except Exception, e:
                LogO.LogE("Peeking at \"%s\" : %s" % (queue, e))
                continue
            if msg:
                try:
                    msg = msg.__call__().encode('utf-8')
                except:
                    LogO.LogE("Could not convert message on \"%s\" to a string, leaving it on queue" % queue)
                    continue
                if verbose:
                    print "++++++++++++++++++++++++++++++++++++++++"
                    print msg
                    print "++++++++++++++++++++++++++++++++++++++++"
                LogO.LogP("Found Message on \"%s\" : \"%s...\"" % (queue, msg[:40]))
                try:
                    rv = svr.accept(msg, agency, messagetype)
                    if rv[0] != "OK":
                        raise Exception, rv[0]
                    LogO.LogP('Message has been sent successfully to IAMS from %s' % queue)
                    MSMQ.Receive(queue)
                    GotOne = 1
                    StoreMsg(msg)
                except Exception, e:
                    LogO.LogE("%s" % e)
        if GotOne == 0:
            time.sleep(sleeptime)
        else:
            GotOne = 0
This is the full code that calls MSMQ. It creates a little program that watches the MSMQ and, when a message arrives, picks it up and sends it off to another server.
Sounds really Python-specific (of which I know nothing) rather than MSMQ-specific. Isn't this just a case of a memory variable being used twice without being cleared in between? The second message is shorter than the first, so there are characters from the first not being overwritten. What do the relevant parts of the Python code look like?
[[21st April]]
The code just shows that you are populating the tmp variable with a message. What happens to tmp before the next message is accessed? I'm assuming it is not cleared.
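If that hypothesis is right, a defensive rewrite of Peek would reset the temporary on every call and hand back a copy of the body rather than the live COM object (a sketch only; note it changes Peek to return a string, so the caller's msg.__call__() step would no longer be needed):
def Peek(queue):
    tmp = None  # reset on every call so a stale value cannot leak through
    qi = msmq.MSMQQueueInfo()
    qi.PathName = queue
    myq = qi.Open(msmq.constants.MQ_PEEK_ACCESS, 0)
    if myq.IsOpen:
        msg = myq.Peek(pythoncom.Empty, pythoncom.Empty, 1)
        if msg is not None:
            tmp = str(msg.Body)  # explicit copy of the message body
        myq.Close()
    return tmp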
