I am currently trying to make a ROS node in Python which has both a subscriber and a publisher.
I've seen examples where a message is published inside the callback, but I want the node to publish messages constantly, and still run the callback whenever a message arrives.
Here is how I do it now:
#!/usr/bin/env python
import rospy
from std_msgs.msg import Empty
from std_msgs.msg import String
import numpy as np

pub = rospy.Publisher('/status', String, queue_size=1000)

def callback(data):
    print "Message received"

def listener():
    rospy.init_node('control', anonymous=True)
    rospy.Subscriber('control_c', Empty, callback)
    rospy.spin()

if __name__ == '__main__':
    print "Running"
    listener()
So where should I publish?
Well, I think there are a lot of possible solutions here (you could even use a separate Python process), but what I'm proposing is a ROS approach using a ROS Timer.
I am not really that fluent in Python, but this code may give you a head start.
#!/usr/bin/env python
import rospy
from std_msgs.msg import Empty
from std_msgs.msg import String
import numpy as np

last_data = ""
started = False

pub = rospy.Publisher('/status', String, queue_size=1000)

def callback(data):
    print "New message received"
    global started, last_data
    last_data = data
    if (not started):
        started = True

def timer_callback(event):
    global started, pub, last_data
    if (started):
        pub.publish(last_data)
        print "Last message published"

def listener():
    rospy.init_node('control', anonymous=True)
    rospy.Subscriber('control_c', String, callback)
    timer = rospy.Timer(rospy.Duration(0.5), timer_callback)
    rospy.spin()
    timer.shutdown()

if __name__ == '__main__':
    print "Running"
    listener()
Here, your callback updates the last received message, and your timer fires every 0.5 s and publishes it.
You can test this code by publishing data on "/control_c" every 3 seconds and configuring your timer to 0.5 s. Start an echo on /status:
$ rostopic echo /status
and you'll see your message published at a 2 Hz rate.
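For the test publisher, something like this should work (the 'hello' payload is arbitrary; -r 0.33 gives roughly one message every 3 seconds):
$ rostopic pub -r 0.33 /control_c std_msgs/String "data: 'hello'"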
Hope that helps !
Simply replace rospy.spin() with the following loop:
while not rospy.is_shutdown():
    # do whatever you want here
    pub.publish(foo)
    rospy.sleep(1)  # sleep for one second
Of course you can adjust the sleep duration to whatever value you want (or even remove it entirely).
According to this reference, subscribers in rospy run in a separate thread, so you don't need to call spin() actively.
Note that in roscpp (i.e. when using C++) this is handled differently. There you have to call ros::spinOnce() in the while loop.
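Putting this together with the code from the question, a minimal sketch (the 1 Hz rate and the "alive" status text are placeholders):

#!/usr/bin/env python
import rospy
from std_msgs.msg import Empty
from std_msgs.msg import String

pub = rospy.Publisher('/status', String, queue_size=1000)

def callback(data):
    # runs in the subscriber's thread whenever a message arrives on control_c
    rospy.loginfo("Message received")

if __name__ == '__main__':
    rospy.init_node('control', anonymous=True)
    rospy.Subscriber('control_c', Empty, callback)
    rate = rospy.Rate(1)  # 1 Hz publish loop instead of rospy.spin()
    while not rospy.is_shutdown():
        pub.publish("alive")  # constantly publish a status message
        rate.sleep()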
I am still struggling with using data from the callback function. I wrote a class and I am trying to get the data from the callback function associated with it; any help would be much appreciated. Do I need to use multithreading, or is there an easier way? When I instantiate the class, the publisher initializes and the callback keeps receiving updated data, but I am not sure how I can use this data elsewhere.
#!/usr/bin/env python3
"""OpenCV feature detectors with ros CompressedImage Topics in python.

This example subscribes to a ros topic containing sensor_msgs
CompressedImage. It converts the CompressedImage into a numpy.ndarray,
then detects and marks features in that image. It finally displays
and publishes the new image - again as CompressedImage topic.
"""
__version__ = '0.1'

from moveit_commander.conversions import pose_to_list
from rospy_tutorials.msg import Floats
from rospy.numpy_msg import numpy_msg
from tf import TransformListener
from std_msgs.msg import String
import geometry_msgs.msg
import moveit_commander
import moveit_msgs.msg
from math import pi
import sys, time
import rospy
import copy

VERBOSE = False

class move_xarm:
    # global my_data
    def __init__(self):
        '''Initialize ros publisher and subscriber'''
        # publish trajectories for RViz to visualize
        self.display_trajectory_publisher = rospy.Publisher('/move_group/display_planned_path', moveit_msgs.msg.DisplayTrajectory, queue_size=20)
        self.data_callback_publisher = rospy.Publisher('/callback_data', numpy_msg(Floats), queue_size=1)
        self.my_data = None
        self.listener()
        # subscribed Topic
        # self.subscriber = rospy.Subscriber("marker_wrt_base_pose", numpy_msg(Floats), self.callback, queue_size=1)
        if VERBOSE:
            print("subscribed to marker_wrt_base_pose")

    def listener(self):
        # In ROS, nodes are uniquely named. If two nodes with the same
        # name are launched, the previous one is kicked off. The
        # anonymous=True flag means that rospy will choose a unique
        # name for our 'listener' node so that multiple listeners can
        # run simultaneously.
        rospy.init_node('move_group_python_interface_tutorial', anonymous=True)
        rospy.Subscriber("marker_wrt_base_pose", numpy_msg(Floats), self.callback, queue_size=1)
        # spin() simply keeps python from exiting until this node is stopped
        rospy.spin()

    def callback(self, ros_data):
        '''Callback function of subscribed topic.
        Here position data get converted into float64'''
        if VERBOSE:
            print('received data of type: "%s"' % ros_data.format)
        self.my_data = ros_data.data
        # my_data = my_data.astype('float64')
        self.data_callback_publisher.publish(ros_data)

if __name__ == '__main__':
    mv = move_xarm()
I resolved this issue.
Create another function outside of the class, say use_data(args).
Call use_data in the main function and you are good to go.
One common problem is the use of the spin command: it blocks the main thread from exiting until ROS invokes a shutdown (via Ctrl+C).
def use_data(args):
    ic = move_xarm()
    rospy.init_node('move_group_python_interface_tutorial', anonymous=True)
    rate = rospy.Rate(5)  # ROS Rate at 5 Hz
    while not rospy.is_shutdown():
        # do whatever you need with ic.my_data here, e.g.
        print(ic.my_data)
        rate.sleep()
I am working on a project where I have to read values from a serial port and display them on a Tkinter GUI. I am using the continuous_threading module of Python: a continuous thread reads the data from the serial port every 0.5 s, but now I want to stop this continuous thread. How should I stop it?
This is the function which I am calling when a checkbutton is pressed:
def scan():
    print("in scan")
    btn1_state = var1.get()
    print("Scan: %d" % btn1_state)
    t1 = continuous_threading.PeriodicThread(0.5, readserial)
    if (btn1_state == 1):
        t1.start()
    else:
        print("entered else ")
        t1.stop()  # I am using stop() but the thread doesn't stop
Please Help
The problem is likely that you are using a blocking read function in your readserial function. It needs a timeout. I can reproduce with this code:
import time
import continuous_threading

time_list = []

def save_time():
    while True:
        time.sleep(1)
        time_list.append(time.time())

th = continuous_threading.PeriodicThread(0.5, save_time)
th.start()
time.sleep(4)
th.join()
print(time_list)
This never exits. (Modified from the library's examples.)
Since continuous_threading expects its event loop to be in control, it never gets to the stop event.
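As a sketch of the fix, assuming readserial uses pyserial (the port name and baud rate are placeholders): open the port with a timeout so each read returns instead of blocking forever, and do one read per call so the PeriodicThread regains control between calls and can actually stop.

import serial
import continuous_threading

# timeout=0.5 makes readline() return after at most 0.5 s instead of blocking
ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=0.5)  # placeholder port and baud rate

def readserial():
    line = ser.readline()  # returns b'' if nothing arrived before the timeout
    if line:
        print(line.decode(errors='replace').strip())

t1 = continuous_threading.PeriodicThread(0.5, readserial)
t1.start()
# ... later, when the checkbutton is unchecked:
t1.stop()  # now works, because readserial() no longer blocks indefinitely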
My problem is: how can I add threading to my program, which communicates over BLE with a Raspberry Pi 3?
My program works, but the response is too slow.
Please help with this. Thanks.
BMS_reading:
import gatt
import sys
import time
import threading

class AnyDevice(gatt.Device):
    def write(self, characteristic):
        self.response = bytearray()
        self.bms_write_characteristic.write_value(bytes([0xDD, 0xA5, 0x03, 0x00, 0xFF, 0xFD, 0x77]))

    def services_resolved(self):
        super().services_resolved()
        device_information_service = next(
            s for s in self.services
            if s.uuid == '0000ff00-0000-1000-8000-00805f9b34fb')
        self.bms_read_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff01-0000-1000-8000-00805f9b34fb')
        self.bms_write_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff02-0000-1000-8000-00805f9b34fb')
        self.bms_read_characteristic.enable_notifications()
        self.write(self.bms_read_characteristic)

    def characteristic_value_updated(self, characteristic, value):
        self.value = value
        def write():
            self.response += self.value
            if (self.response.endswith(b'w')):
                self.response = self.response[4:]
                self.SoC = int.from_bytes(self.response[19:20], byteorder='big')
                self.manager.stop()
        write()

# reading loop (I want to add threading and read info "SoC")
while True:
    address = "A4:C1:38:A0:59:EB"
    manager = gatt.DeviceManager(adapter_name='hci0')
    device = AnyDevice(mac_address=address, manager=manager)
    device.connect()
    manager.run()
    print("Capacity is: " + str(device.SoC) + "%")
TERMINAL:
<<< Capacity is: 76%
# long delay which I don't want
<<< Capacity is: 76%
I don't know how to do this.
When I wrap the whole while loop in a thread, the communication does not have time to react and prints wrong numbers or errors.
Please help.
-------------------- EDIT: PROGRAM UPDATED TO USE NOTIFICATIONS --------------------
import gatt
import json
import sys
#from gi.repository import GLib

manager = gatt.DeviceManager(adapter_name='hci0')

class AnyDevice(gatt.Device):
    def connect_succeeded(self):
        super().connect_succeeded()
        print("[%s] Connected" % (self.mac_address))

    def connect_failed(self, error):
        super().connect_failed(error)
        print("[%s] Connection failed: %s" % (self.mac_address, str(error)))

    def disconnect_succeeded(self):
        super().disconnect_succeeded()
        print("[%s] Disconnected" % (self.mac_address))
        self.manager.stop()

    def services_resolved(self):
        super().services_resolved()
        device_information_service = next(
            s for s in self.services
            if s.uuid == '0000ff00-0000-1000-8000-00805f9b34fb')
        self.bms_read_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff01-0000-1000-8000-00805f9b34fb')
        self.bms_write_characteristic = next(
            c for c in device_information_service.characteristics
            if c.uuid == '0000ff02-0000-1000-8000-00805f9b34fb')
        print("BMS found")
        self.bms_read_characteristic.enable_notifications()

    def characteristic_enable_notifications_succeeded(self, characteristic):
        super().characteristic_enable_notifications_succeeded(characteristic)
        print("BMS request generic data")
        self.response = bytearray()
        self.bms_write_characteristic.write_value(bytes([0xDD, 0xA5, 0x03, 0x00, 0xFF, 0xFD, 0x77]))

    def characteristic_enable_notifications_failed(self, characteristic, error):
        super().characteristic_enable_notifications_failed(characteristic, error)
        print("BMS notification failed:", error)

    def characteristic_value_updated(self, characteristic, value):
        self.response += value
        if (self.response.endswith(b'w')):
            self.response = self.response[4:]
            temperature = (int.from_bytes(self.response[23 + 1*2:1*2 + 25], 'big') - 2731) / 10
            print("Temperature is: " + str(temperature) + " C")

    def characteristic_write_value_failed(self, characteristic, error):
        print("BMS write failed:", error)

device = AnyDevice(mac_address="A4:C1:38:A0:59:EB", manager=manager)
device.connect()
manager.run()
Terminal output, even though the value changes and the manager is running:
>>> BMS found
>>> BMS request generic data
>>> Temperature is: 19 C
# the program gets stuck here even though the value keeps changing
Thank you, I edited the program to use notifications and, as you can see, the device supports them.
But I still have a problem: even though the values (temperatures) change while the manager is in manager.run(), the terminal prints only one value and then nothing else, even if I heat the device. When I restart the program the value changes again, and again only one reading is printed. Is my code written correctly?
Thank you so much for your time.
My assumption is that you are using the gatt-python library.
The line manager.run() is starting the event loop so you do not need to have a while loop in your code.
If the temperature characteristic supports notifications, then turning them on would be the most efficient way of reading the values when they change.
If the device does not support notifications, then creating a timed event to read the temperature at the frequency you require would be recommended. The documentation for timeout_add_seconds isn't always the easiest to understand, but the import is:
from gi.repository import GLib
Then just before you run the event loop call:
GLib.timeout_add_seconds(2, my_callback_to_read_temperature)
I expect gi.repository to be installed on the RPi already but if you need the instructions for installing, then they are at: https://pygobject.readthedocs.io/en/latest/getting_started.html#ubuntu-getting-started
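Tying that to the question's code, a rough sketch (it assumes the device and manager objects from the edited program above; the callback re-sends the BMS request, and the reply still arrives through characteristic_value_updated()):

from gi.repository import GLib

def my_callback_to_read_temperature():
    # re-request generic data; bms_write_characteristic only exists after services_resolved()
    if hasattr(device, 'bms_write_characteristic'):
        device.response = bytearray()
        device.bms_write_characteristic.write_value(bytes([0xDD, 0xA5, 0x03, 0x00, 0xFF, 0xFD, 0x77]))
    return True  # returning True keeps the timer firing every 2 seconds

GLib.timeout_add_seconds(2, my_callback_to_read_temperature)
manager.run()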
If I have a python script running (with full Tkinter GUI and everything) and I want to pass the live data it is gathering (stored internally in arrays and such) to another python script, what would be the best way of doing that?
I cannot simply import script A into script B as it will create a new instance of script A, rather than accessing any variables in the already running script A.
The only way I can think of doing it is by having script A write to a file and then having script B read the data from the file. This is less than ideal, however, as something bad might happen if script B tries to read a file that script A is still writing to. Also, I am looking for a much faster communication speed between the two programs.
EDIT:
Here are the examples as requested. I am aware why this doesn't work, but it is the basic premise of what needs to be achieved. My source code is very long and unfortunately confidential, so it is not going to help here. In summary, script A is running Tkinter and gathering data, while script B is views.py as a part of Django, but I'm hoping this can be achieved as a part of Python.
Script A
import time

i = 0

def return_data():
    return i

if __name__ == "__main__":
    while True:
        i = i + 1
        print i
        time.sleep(.01)
Script B
import time
from scriptA import return_data

if __name__ == '__main__':
    while True:
        print return_data()  # from script A
        time.sleep(1)
You can use the multiprocessing module to implement a Pipe between the two modules. Then you can start one of the modules as a Process and use the Pipe to communicate with it. The best part about using pipes is that you can also pass Python objects like dicts and lists through them.
Ex:
mp2.py:
from multiprocessing import Process, Queue, Pipe
from mp1 import f

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # prints "Hello"
mp1.py:
from multiprocessing import Process, Pipe

def f(child_conn):
    msg = "Hello"
    child_conn.send(msg)
    child_conn.close()
If you want to read and modify shared data between two scripts which run separately, a good solution is to take advantage of the Python multiprocessing module and use a Pipe() or a Queue() (see the differences here). This way you get to synchronize the scripts and avoid problems with concurrency and global variables (like what happens if both scripts want to modify a variable at the same time).
As Akshay Apte said in his answer, the best part about using pipes/queues is that you can pass Python objects through them.
Also, there are methods to avoid waiting for data if none has been passed yet (queue.empty() and pipeConn.poll()).
See an example using Queue() below:
# main.py
from multiprocessing import Process, Queue
from stage1 import Stage1
from stage2 import Stage2
s1= Stage1()
s2= Stage2()
# S1 to S2 communication
queueS1 = Queue() # s1.stage1() writes to queueS1
# S2 to S1 communication
queueS2 = Queue() # s2.stage2() writes to queueS2
# start s2 as another process
s2 = Process(target=s2.stage2, args=(queueS1, queueS2))
s2.daemon = True
s2.start() # Launch the stage2 process
s1.stage1(queueS1, queueS2) # start sending stuff from s1 to s2
s2.join() # wait till s2 daemon finishes
# stage1.py
import time
import random

class Stage1:
    def stage1(self, queueS1, queueS2):
        print("stage1")
        lala = []
        lis = [1, 2, 3, 4, 5]
        for i in range(len(lis)):
            # to avoid unnecessary waiting
            if not queueS2.empty():
                msg = queueS2.get()  # get msg from s2
                print("! ! ! stage1 RECEIVED from s2:", msg)
                lala = [6, 7, 8]  # now that a msg was received, further msgs will be different
            time.sleep(1)  # work
            random.shuffle(lis)
            queueS1.put(lis + lala)
        queueS1.put('s1 is DONE')
# stage2.py
import time

class Stage2:
    def stage2(self, queueS1, queueS2):
        print("stage2")
        while True:
            msg = queueS1.get()  # wait till there is a msg from s1
            print("- - - stage2 RECEIVED from s1:", msg)
            if msg == 's1 is DONE':
                break  # ends loop
            time.sleep(1)  # work
            queueS2.put("update lists")
EDIT: I just found that you can use queue.get(False) to avoid blocking when receiving data (it raises queue.Empty if nothing is there yet). This way there's no need to check first whether the queue is empty. This is not possible with pipes.
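A minimal sketch of that non-blocking receive (queueS2 stands for any multiprocessing.Queue from the example above):

import queue  # only needed for the queue.Empty exception

try:
    msg = queueS2.get(False)  # or queueS2.get_nowait()
    print("received:", msg)
except queue.Empty:
    pass  # nothing has been sent yet, carry on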
You could use the pickle module to pass data between two Python programs.
import pickle

def storeData():
    # initializing data to be stored in db
    employee1 = {'key': 'Engineer', 'name': 'Harrison',
                 'age': 21, 'pay': 40000}
    employee2 = {'key': 'LeadDeveloper', 'name': 'Jack',
                 'age': 50, 'pay': 50000}
    # database
    db = {}
    db['employee1'] = employee1
    db['employee2'] = employee2
    # It's important to use binary mode
    dbfile = open('examplePickle', 'ab')
    # source, destination
    pickle.dump(db, dbfile)
    dbfile.close()

def loadData():
    # for reading, binary mode is also important
    dbfile = open('examplePickle', 'rb')
    db = pickle.load(dbfile)
    for keys in db:
        print(keys, '=>', db[keys])
    dbfile.close()
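A minimal usage sketch, assuming both scripts run from the same directory so they share the examplePickle file:

# in the writer script
storeData()

# in the reader script
loadData()  # prints employee1 => {...} and employee2 => {...}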
This will pass data to and from two running scripts over a TCP socket using ZeroMQ (https://zeromq.org/languages/python/). Required module: pyzmq (pip install pyzmq).
This is called client/server communication: the server waits for the client to send a request, and the client will not run unless the server is running. It also lets you send a request from one device (the client) to another (the server), as long as both are on the same network. The server binds to all interfaces (the address marked with * on the server side); point the client at the server's actual IP address instead of localhost. To find the server's IP, check your device's network settings; note that this can differ from the public IP Google reports for you (I'm on IPv6 with DDoS protection, so mine differs). A question for the OP: does script B have to be always running, or can script B be imported as a module into script A? If so, look up how to make Python modules.
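Since this answer doesn't include code, here is a minimal REQ/REP sketch with pyzmq (the port 5555, the JSON payloads, and the file names are arbitrary placeholders):
server.py (run this first, e.g. inside script A):
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")  # listen on all interfaces, port 5555

while True:
    request = socket.recv_json()                # wait for a request from the client
    socket.send_json({"live_data": [1, 2, 3]})  # reply with whatever data you are gathering
client.py (e.g. script B):
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")  # use the server machine's IP instead of localhost across devices

socket.send_json({"want": "data"})
print(socket.recv_json())  # the live data from the other script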
I solved the same problem using the Shared Memory Dict library, a very simple dict implementation on top of multiprocessing.shared_memory.
Source1.py
from shared_memory_dict import SharedMemoryDict
from time import sleep

smd_config = SharedMemoryDict(name='config', size=1024)

if __name__ == "__main__":
    smd_config["status"] = True
    while True:
        smd_config["status"] = not smd_config["status"]
        sleep(1)
Source2.py
from shared_memory_dict import SharedMemoryDict
from time import sleep

smd_config = SharedMemoryDict(name='config', size=1024)

if __name__ == "__main__":
    while True:
        print(smd_config["status"])
        sleep(1)
The following script listens to an IMAP connection using IMAP IDLE and depends heavily on threads. What's the easiest way for me to eliminate the thread usage and just use the main thread?
As a new Python developer I tried editing the def __init__(self, conn): method, but just got more and more errors.
A code sample would help me a lot.
#!/usr/local/bin/python2.7

print "Content-type: text/html\r\n\r\n";

import socket, ssl, json, struct, re
import imaplib2, time
from threading import *

# enter gmail login details here
USER = "username#gmail.com"
PASSWORD = "password"

# enter device token here
deviceToken = 'my device token x x x x x'
deviceToken = deviceToken.replace(' ', '').decode('hex')

currentBadgeNum = -1

def getUnseen():
    (resp, data) = M.status("INBOX", '(UNSEEN)')
    print data
    return int(re.findall("UNSEEN (\d)*\)", data[0])[0])

def sendPushNotification(badgeNum):
    global currentBadgeNum, deviceToken
    if badgeNum != currentBadgeNum:
        currentBadgeNum = badgeNum
        thePayLoad = {
            'aps': {
                'alert': 'Hello world!',
                'sound': '',
                'badge': badgeNum,
            },
            'test_data': {'foo': 'bar'},
        }
        theCertfile = 'certif.pem'
        theHost = ('gateway.push.apple.com', 2195)
        data = json.dumps(thePayLoad)
        theFormat = '!BH32sH%ds' % len(data)
        theNotification = struct.pack(theFormat, 0, 32,
                                      deviceToken, len(data), data)
        ssl_sock = ssl.wrap_socket(socket.socket(socket.AF_INET,
                                                 socket.SOCK_STREAM), certfile=theCertfile)
        ssl_sock.connect(theHost)
        ssl_sock.write(theNotification)
        ssl_sock.close()
        print "Sent Push alert."

# This is the threading object that does all the waiting on
# the event
class Idler(object):
    def __init__(self, conn):
        self.thread = Thread(target=self.idle)
        self.M = conn
        self.event = Event()

    def start(self):
        self.thread.start()

    def stop(self):
        # This is a neat trick to make thread end. Took me a
        # while to figure that one out!
        self.event.set()

    def join(self):
        self.thread.join()

    def idle(self):
        # Starting an unending loop here
        while True:
            # This is part of the trick to make the loop stop
            # when the stop() command is given
            if self.event.isSet():
                return
            self.needsync = False

            # A callback method that gets called when a new
            # email arrives. Very basic, but that's good.
            def callback(args):
                if not self.event.isSet():
                    self.needsync = True
                    self.event.set()

            # Do the actual idle call. This returns immediately,
            # since it's asynchronous.
            self.M.idle(callback=callback)
            # This waits until the event is set. The event is
            # set by the callback, when the server 'answers'
            # the idle call and the callback function gets
            # called.
            self.event.wait()
            # Because the function sets the needsync variable,
            # this helps escape the loop without doing
            # anything if the stop() is called. Kinda neat
            # solution.
            if self.needsync:
                self.event.clear()
                self.dosync()

    # The method that gets called when a new email arrives.
    # Replace it with something better.
    def dosync(self):
        print "Got an event!"
        numUnseen = getUnseen()
        sendPushNotification(numUnseen)

# Had to do this stuff in a try-finally, since some testing
# went a little wrong.....
while True:
    try:
        # Set the following two lines to your creds and server
        M = imaplib2.IMAP4_SSL("imap.gmail.com")
        M.login(USER, PASSWORD)
        M.debug = 4
        # We need to get out of the AUTH state, so we just select
        # the INBOX.
        M.select("INBOX")
        numUnseen = getUnseen()
        sendPushNotification(numUnseen)

        typ, data = M.fetch(1, '(RFC822)')
        raw_email = data[0][1]

        import email
        email_message = email.message_from_string(raw_email)
        print email_message['Subject']

        #print M.status("INBOX", '(UNSEEN)')

        # Start the Idler thread
        idler = Idler(M)
        idler.start()

        # Sleep forever, one minute at a time
        while True:
            time.sleep(60)

    except imaplib2.IMAP4.abort:
        print("Disconnected. Trying again.")

    finally:
        # Clean up.
        #idler.stop() #Commented out to see the real error
        #idler.join() #Commented out to see the real error
        #M.close() #Commented out to see the real error

        # This is important!
        M.logout()
As far as I can tell, this code is hopelessly confused because the author used the "imaplib2" project library which forces a threading model which this code then never uses.
Only one thread is ever created, which wouldn't need to be a thread but for the choice of imaplib2. However, as the imaplib2 documentation notes:
This module presents an almost identical API as that provided by the standard python library module imaplib, the main difference being that this version allows parallel execution of commands on the IMAP4 server, and implements the IMAP4rev1 IDLE extension. (imaplib2 can be substituted for imaplib in existing clients with no changes in the code, but see the caveat below.)
Which makes it appear that you should be able to throw out much of class Idler and just use the connection M. I recommend that you look at Doug Hellmann's excellent Python Module Of The Week for module imaplib prior to looking at the official documentation. You'll need to reverse engineer the code to find out its intent, but it looks to me like:
1. Open a connection to GMail
2. Check for unseen messages in Inbox
3. Count unseen messages from (2)
4. Send a dummy message to some service at gateway.push.apple.com
5. Wait for notice, go to (2)
Perhaps the most interesting thing about the code is that it doesn't appear to do anything, although what sendPushNotification (step 4) does is a mystery, and the one line that uses an imaplib2 specific service:
self.M.idle(callback=callback)
uses a named argument that I don't see in the module documentation. Do you know if this code ever actually ran?
Aside from unneeded complexity, there's another reason to drop imaplib2: it exists independently on SourceForge and PyPI, and one maintainer claimed two years ago that "An attempt will be made to keep it up-to-date with the original". Which one do you have? Which would you install?
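As a rough sketch of that simplification (a plain polling loop on the standard library imaplib instead of IDLE; it reuses USER, PASSWORD and sendPushNotification from the question, keeps the original Python 2 style, and the 60-second interval is arbitrary):

import imaplib, re, time

M = imaplib.IMAP4_SSL("imap.gmail.com")
M.login(USER, PASSWORD)
M.select("INBOX")

def get_unseen():
    # same parsing idea as the original getUnseen(), on the standard imaplib connection
    resp, data = M.status("INBOX", '(UNSEEN)')
    return int(re.search(r"UNSEEN (\d+)", data[0]).group(1))

while True:
    sendPushNotification(get_unseen())
    time.sleep(60)  # poll instead of waiting for an IDLE callback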
Don't do it
Since you are trying to remove the Thread usage solely because you didn't find how to handle the exceptions from the server, I don't recommend removing it: because of the async nature of the library itself, the Idler handles this more smoothly than a single thread could.
Solution
You need to wrap the self.M.idle(callback=callback) call in a try/except and then re-raise the exception in the main thread. There you handle it by re-running the code that restarts the connection.
You can find more details of the solution and possible reasons in this answer: https://stackoverflow.com/a/50163971/1544154
Complete solution is here: https://www.github.com/Elijas/email-notifier
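In isolation, the pattern looks roughly like this (a generic sketch, not the linked code; the Worker stands in for the Idler's idle() loop):

import threading

class Worker(object):
    def __init__(self):
        self.exc = None
        self.thread = threading.Thread(target=self.run)

    def run(self):
        try:
            # stand-in for self.M.idle(callback=callback) and the event wait
            raise IOError("server dropped the connection")
        except Exception as e:
            self.exc = e  # remember the exception instead of letting the thread die silently

    def start(self):
        self.thread.start()

w = Worker()
w.start()
w.thread.join()
if w.exc is not None:
    # re-raise in the main thread, where you can reconnect and restart the Idler
    raise w.exc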