How to run an object whose functions can be called as threads - Python

I'm writing a Python script for a Raspberry Pi that listens to a server/message broker for commands and executes those commands on some hardware. Some commands must last for a specified duration (e.g. something needs to turn on, stay on for t seconds, then turn off), and this is currently done by sleeping for that duration between the on and off commands (the sleep happens inside a function call -- hardware1.on(dur = t)). I would like to be able to interrupt that sequence with another command (such as turning the hardware off before t seconds are up). I've tried to accomplish this via multiprocessing, but cannot get the behavior I'm looking for.
The hardware (a stalk of differently colored lights) is controlled via a class, LiteStalk, which is made up of Lite objects (one per light in the stalk) that have their own class. Both classes inherit from multiprocessing.Process. My main code creates a specific LiteStalk and then listens to an MQTT message broker for commands; I evaluate the commands published to the broker inside the on_message callback, which runs whenever a message is published.
import time
import LiteCntrlModule as LiteStalkMod
import multiprocessing
import paho.mqtt.client as mqtt
import RPi.GPIO as gpio  # needed for gpio.setmode() below

print('Starting...\n')

# Set gpio designation mode to BCM
gpio.setmode(gpio.BCM)

# Initialize light stalk
stalkdict = {'red':1, 'yel':2, 'grn':3, 'bzr':4}
stalk = LiteStalkMod.LiteStalk(stalkdict)
msgRec = ""

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    if(rc == 0):
        print('Code "0" indicates successful connection. Waiting for messages...')
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe("asset/andon1/state")

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.topic+" "+str(msg.payload))
    msgRec = msg.payload
    eval(msg.payload)
    if msg.payload == "stalk.off()":
        print("If this executes while another command is running, it works!")

client = mqtt.Client(client_id="")
client.username_pw_set("mytopic", password="mypassword")
client.on_connect = on_connect
client.on_message = on_message
client.connect("mymessagebrokeraddress", 1883, 60)
client.subscribe("mytopic")

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
try:
    client.loop_start()  # start listening in a thread and proceed
except KeyboardInterrupt:  # so that aborting with Ctrl+C works cleanly
    stalk.off()
finally:
    stalk.shutDown()
LiteCntrlModule (the Lite and LiteStalk classes) follows:
import time
import multiprocessing
from relay_lib_seeed import *

class Lite(multiprocessing.Process):
    # A Lite object has an associated relay and functions
    #   Ex: red
    # A lite can be controlled
    #   Ex: red.blink()

    def __init__(self, relayIn):
        # Ex: red = Lite.Lite(1)
        multiprocessing.Process.__init__(self)  # allows you to create multiple objects that can be run as threads
        self.daemon = True  # creates a daemon thread that will exit when the main code terminates
        self.start()  # allows multiproc. to begin
        self.relay = relayIn

    def shutDown(self):
        # terminates the threaded object
        relay_off(self.relay)
        self.join()

    def off(self, dur = 0):
        # turns light off
        ...

    def on(self, dur = 0):
        # turns light on, optional duration to stay on for
        ...

    def blink(self, dur = 0, timeOn = .5, timeOff = .5):
        # blinks light
        ...

class LiteStalk(multiprocessing.Process):
    # A LiteStalk object can have any number of "lite" objects in it. Ex:
    #   Object: stalk1
    # A lite object in stalk1 represents one segment/color of the light stalk
    #   stalk1.red
    # Any lite can be turned on/off in various patterns for amounts of time, etc.
    #   stalk1.red.blink()
    # An entire stalk can be controlled all at once
    #   stalk1.cycle()

    liteList = {}

    def __init__(self, liteListIn):
        # liteListIn = {'clr1':relay1, 'clr2':relay2, 'clr3':relay3, ...}
        self.liteList = liteListIn
        multiprocessing.Process.__init__(self)  # allows you to create multiple objects that can be run as threads
        self.daemon = True  # creates a daemon thread that will exit when the main code terminates
        self.start()  # allows multiproc. to begin
        for lite in self.liteList:  # for each lite color string in the lites dict
            setattr(self, lite, Lite(self.liteList[lite]))  # creates a lite obj attr in the LiteStalk obj
        print(self.liteList)

    def shutDown(self):
        # each light is turned off and that gpio pin is cleaned up
        relay_all_off()
        self.join()  # joins thread

    def off(self, dur = 0):
        # turns all hardware off
        ...

    def on(self):
        # turns all hardware on, optional duration to stay on for
        ...

    def blink(self, timeOn, timeOff):
        # blinks all hardware
        ...

    def cntDn(self, dur = 20, yelDur = 2, redDur = 10):  # in min
        # enters a count down sequence
        ...
The command always runs to completion before whatever other commands were published to the broker get executed, i.e. the stalk stays on for the commanded duration and cannot be told to turn off (or do anything else) before that duration is up. I think this may be because I am not putting the functionality of my multiprocessing-able objects inside a run() function, but I've messed around with that with no luck.
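For reference, multiprocessing.Process only executes the body of run() in a child process when start() is called; any other method invoked on the object (on(), off(), ...) runs in whichever process calls it. A minimal sketch, independent of the classes above:

import multiprocessing
import time

class Worker(multiprocessing.Process):
    def run(self):
        # only this method executes in the child process
        for i in range(3):
            print("child tick", i)
            time.sleep(1)

if __name__ == "__main__":
    w = Worker()
    w.start()  # spawns the child process and calls run() there
    w.join()   # the parent just waits; calling w.run() directly would run in the parent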

The Paho MQTT client is single threaded (the thread you start with the client.loop_start() function), so it can only call on_message() for one message at a time.
This means it will block in the call to eval() until whatever was passed to it has finished, even if that code is creating new threads to do things.
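A rough sketch of one way around that, assuming the stalk object and command strings from the question: hand each command to a short-lived thread inside on_message so the paho network thread returns immediately. The handle_command helper is made up for the sketch, and eval() of an arbitrary payload is of course only reasonable on a trusted broker.

import threading

def handle_command(payload):
    # runs outside the MQTT network thread, so a long-running command
    # no longer prevents later messages from being dispatched
    eval(payload)

def on_message(client, userdata, msg):
    threading.Thread(target=handle_command, args=(msg.payload,), daemon=True).start()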

I was going to suggest replacing the sleep with a wait with a timeout on a threading.Event (or the equivalent), then checking when the wait ends whether that was because the event was set or because the timeout expired; if the event was set, stop.
But it seems there are other issues here than just an interruptible sleep.
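A minimal sketch of that interruptible sleep, assuming the relay helpers from relay_lib_seeed; the _stop_evt attribute and the interrupt() method are invented for the illustration and are not part of the original class:

import threading
from relay_lib_seeed import relay_on, relay_off

class Lite(object):
    def __init__(self, relayIn):
        self.relay = relayIn
        self._stop_evt = threading.Event()  # set from another thread to cut a timed command short

    def on(self, dur=0):
        relay_on(self.relay)
        if dur:
            # wait() returns as soon as interrupt() sets the event, or after dur seconds
            self._stop_evt.wait(timeout=dur)
            self._stop_evt.clear()
            relay_off(self.relay)

    def interrupt(self):
        self._stop_evt.set()  # e.g. call this from on_message when an "off" command arrives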

Related

Paho MQTT stops receiving and publishing messages after a day or two

I have been having some interesting issues recently with Python and MQTT.
Basically, my code subscribes to a topic, and every time a new message is published it tries to control a device. Controlling the device is a blocking function, so it is run in a separate thread so that on_message() returns immediately.
Additionally, the code publishes a status to a topic every 60 seconds. The code runs fine in the beginning, often for a day or two. The device is controlled via the subscribed MQTT messages and the status is published just fine.
Then it suddenly stops receiving any MQTT messages and also stops publishing them. The publish() function, however, does not indicate any problem, and is_connected() returns True. Restarting the program lets it run for another day or two. Below is the full code.
import paho.mqtt.client as mqtt
import json
import threading

class Controller():
    def __init__(self):
        self.mqtt_client = mqtt.Client()
        self.pub_topic = "outgoing"
        self.mqtt_client.on_message = self.on_message
        self.mqtt_client.connect("192.168.1.1", 1883, 600)
        self.mqtt_client.subscribe("incoming")

    # This is a blocking function, execution takes approximately 5 minutes.
    # The function only runs if there is no existing thread running it yet.
    def control_device(self, input_commands):
        print("Do some stuff...")

    def process_mqtt(self, msg):
        mqtt_msg = json.loads(msg.payload.decode('utf-8'))
        self.control_device(mqtt_msg)
        payload = '{"message": "process started"}'
        self.mqtt_client.publish(self.pub_topic, payload)

    def on_message(self, client, userdata, msg):
        thread = threading.Thread(target=self.process_mqtt, args=(msg,))
        thread.start()

    # Status is sent to the same topic every 60 seconds
    def send_status_msg(self):
        if minute_passed:
            payload = '{"status": 0}'
            self.mqtt_client.publish(self.pub_topic, payload)

    def run(self):
        while True:
            self.mqtt_client.loop()
            self.send_status_msg()

if __name__ == "__main__":
    c = Controller()
    c.run()
Is there something I have not understood about how the MQTT library works? I found some discussion saying you should not publish from inside on_message(), but in this case the publish is done in a separate thread.
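As an aside, the comment above control_device() says it should only run when no other thread is already running it, but nothing in the posted code enforces that, so slow runs can pile up threads. One possible guard, sketched with a non-blocking lock (the _busy attribute is invented here and not part of the original class):

import threading

class Controller():
    def __init__(self):
        self._busy = threading.Lock()  # guards control_device()

    def control_device(self, input_commands):
        print("Do some stuff...")      # stands in for the ~5 minute blocking call

    def process_mqtt(self, msg):
        # start a control run only if no other thread is in the middle of one
        if not self._busy.acquire(blocking=False):
            return
        try:
            self.control_device(msg)
        finally:
            self._busy.release()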

How to stop a websocket client without stopping reactor

I have an app, similar to a chat room and written in Python, that is meant to do the following:
1. Prompt the user for a websocket server address.
2. Create a websocket client that connects to the server and sends/receives messages, and disable the ability to create another websocket client.
3. After receiving "close" from the server (NOT a close frame), the client should drop the connection and re-enable the app to create a client. Go back to 1.
4. If the user exits the app, exit the websocket client if one is running.
My approach is to use a main thread to deal with user input. When the user hits enter, a thread is created for the WebSocket client using Autobahn's Twisted module, and a Queue is passed to it. The thread checks whether the reactor is running and starts it if it is not.
The onMessage method is overridden to put a closing flag into the Queue when "close" is received. The main thread busy-checks the Queue until it receives the flag, then goes back to the start. The code looks like the following.
Main thread.
def main_thread():
    while True:
        text = raw_input("Input server url or exit")
        if text == "exit":
            if myreactor:
                myreactor.stop()
            break
        msgq = Queue.Queue()
        threading.Thread(target=wsthread, args=(text, msgq)).start()
        is_close = False
        while True:
            if msgq.empty() is False:
                msg = msgq.get()
                if msg == "close":
                    is_close = True
                else:
                    print msg
            if is_close:
                break
        print 'Websocket client closed!'
Factory and Protocol.
class MyProtocol(WebSocketClientProtocol):
    def onMessage(self, payload, isBinary):
        msg = payload.decode('utf-8')
        self.Factory.q.put(msg)
        if msg == 'close':
            self.dropConnection(abort=True)

class WebSocketClientFactoryWithQ(WebSocketClientFactory):
    def __init__(self, *args, **kwargs):
        self.queue = kwargs.pop('queue', None)
        WebSocketClientFactory.__init__(self, *args, **kwargs)
Client thread.
def wsthread(url, q):
    factory = WebSocketClientFactoryWithQ(url=url, queue=q)
    factory.protocol = MyProtocol
    connectWS(Factory)
    if myreactor is None:
        myreactor = reactor
        reactor.run()
    print 'Done'
Now I have a problem: it seems that my client thread never stops. Even after I receive "close", it appears to keep running, and every time I try to create a new client, a new thread is created. I understand the first thread won't stop, since reactor.run() runs forever, but from the second thread onward it should be non-blocking, since I'm not starting the reactor anymore. How can I change that?
EDIT:
I ended up solving it by:
1. Adding stopFactory() after disconnect.
2. Making the protocol functions use reactor.callFromThread().
3. Starting the reactor in the first thread, putting clients in other threads, and using reactor.callInThread() to create them.
Your main_thread creates new threads running wsthread. wsthread uses Twisted APIs. The first wsthread becomes the reactor thread. All subsequent threads are different and it is undefined what happens if you use a Twisted API from them.
You should almost certainly remove the use of threads from your application. For dealing with console input in a Twisted-based application, take a look at twisted.conch.stdio (not the best documented part of Twisted, alas, but just what you want).
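If some threading has to stay, Twisted's documented bridge from other threads back into the reactor is reactor.callFromThread. A tiny sketch of stopping the reactor safely from a non-reactor thread (the five-second timer merely stands in for your "close" handling):

import threading
from twisted.internet import reactor

def ask_reactor_to_stop():
    # runs in a non-reactor thread; callFromThread schedules reactor.stop()
    # to execute inside the reactor thread, the only place it is safe to call
    reactor.callFromThread(reactor.stop)

threading.Timer(5.0, ask_reactor_to_stop).start()
reactor.run()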

How to keep multiple Paho MQTT Clients running as a service/daemon

I want to implement a Paho MQTT Python service that is always running, receiving and sending messages. If an error occurs in any instance, it should restart.
I implemented two classes that each start a threaded network loop with paho's loop_start(). These classes then have some callback functions which call other classes and so on.
For now I have a simple Python script that instantiates the classes and loops:
from one import one
from two import two
import time

one()
two()

while True:
    if one.is_alive():
        print("one is still alive")
    else:
        print("one died - do something!")
    time.sleep(1)
And here is my class "one":
import paho.mqtt.client as mqtt
import json

class one():
    def __init__(self):
        self.__client = mqtt.Client(client_id = "one")
        self.__client.connect("localhost", 1883)
        self.__client.subscribe("one")
        self.__client.on_connect = self.__on_connect
        self.__client.on_message = self.__on_message
        self.__client.on_disconnect = self.__on_disconnect
        self.__client.loop_start()

    def __on_connect(self, client, userdata, flags, rc):
        print("one: on_connect")

    def __on_disconnect(self, client, userdata, flags, rc):
        print("one: on_disconnect")

    def __on_message(self, client, userdata, message):
        str_message = message.payload.decode('utf-8')
        message = json.loads(str_message)
        print("one: on_message: " + str(message))

    def is_alive(self):
        return True
However, if I send a payload that produces an error (a pickled message instead of JSON, for example), my is_alive() function still returns True, but the Paho side is no longer responsive: no further messages reach on_message. So only part of the class is still responsive!?
Class "two" is still responsive, and the script is still running in its "while True" loop.
How do I properly check the functionality of such a class?
I think you have to build a checker method, like class1.isAlive(), that tells you whether the class is still waiting for requests. You would call it from the while True loop and react to failures there.
Additionally, you could write your own event with a wait function. Waiting is more CPU hungry but more responsive (see here for an example); it also depends on your Python version.
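One possible shape for such a checker, assuming a healthy client sees messages (or at least reconnect callbacks) reasonably often. The __last_seen attribute, the 120-second threshold and the try/except around the JSON decode are choices made for this sketch, not part of the original class:

import time
import json
import paho.mqtt.client as mqtt

class one():
    def __init__(self):
        self.__last_seen = time.time()
        self.__client = mqtt.Client(client_id="one")
        self.__client.on_connect = self.__on_connect
        self.__client.on_message = self.__on_message
        self.__client.connect("localhost", 1883)
        self.__client.subscribe("one")
        self.__client.loop_start()

    def __on_connect(self, client, userdata, flags, rc):
        self.__last_seen = time.time()

    def __on_message(self, client, userdata, message):
        self.__last_seen = time.time()
        try:
            print("one: on_message: " + str(json.loads(message.payload.decode('utf-8'))))
        except ValueError:
            # a payload that is not valid UTF-8 JSON no longer raises out of the callback
            print("one: ignoring a malformed payload")

    def is_alive(self, timeout=120):
        # "alive" here means a callback fired within the last `timeout` seconds
        return (time.time() - self.__last_seen) < timeout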

How to get out of the while loop if I receive a specific string?

I have set up a Raspberry Pi as a server and I need to control its GPIO from another device using sockets. When the client sends the string 'on', 'off' or 'blink', the LED must turn on, turn off, or blink. On/off/blink all work on their own, but I'm facing an issue while the LED is blinking: if the client transmits 'on' or 'off' during the blink operation, the on/off operation fails. How can I fix that?
Any help is appreciated.
# server code
import socket
import sys
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(12,GPIO.OUT)
GPIO.output(12,False)

ms=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
ms.bind(('',1234))
ms.listen(1)
conn,addr=ms.accept()
print('Connection Established from ',addr)

while True:
    data=conn.recv(1000)
    print(addr,': ',data)
    if not data:
        break
    elif data==b'raspberry\n':
        print('Hello Pi')
    elif data==b'on\n':
        print('led is turned ON')
        GPIO.output(12,True)
    elif data==b'off\n':
        print('led is turned OFF')
        GPIO.output(12,False)
    elif data==b'blink\n':
        print('led blinking')
        while True:  #.... Here Is my query....
            GPIO.output(12,True)
            time.sleep(0.5)
            GPIO.output(12,False)
            time.sleep(0.5)
            if conn.recv(1000)==b'on\n' or conn.recv(1000)==b'off\n':
                break
    elif data==b'exit\n':
        print('Goodbye..')
        time.sleep(1.5)
        break

conn.close()
ms.close()
sys.exit()
The thing is that you consume the received data, but you do not store it in data.
You need to store the received data in data:
while True:
    GPIO.output(12,True)
    time.sleep(0.5)
    GPIO.output(12,False)
    time.sleep(0.5)
    data = conn.recv(1000)
    if data==b'on\n' or data==b'off\n':
        break
But then, data will be overwritten at the next iteration, by data = conn.recv(1000).
Therefore, you need to respond to the on/off instruction at this very point.
while True:
    GPIO.output(12,True)
    time.sleep(0.5)
    GPIO.output(12,False)
    time.sleep(0.5)
    data = conn.recv(1000)
    if data==b'on\n':
        # on
        break
    elif data==b'off\n':
        # off
        break
Since you asked for it, here is a simple way to structure such a network application.
First, have a thread that receives the messages.
import threading
import queue

class Receiver(threading.Thread):
    def __init__(self, connection):
        super().__init__()
        self.connection = connection
        self.messages = queue.Queue()

    def run(self):
        while True:
            message = self.connection.recv(1000)
            self.messages.put(message)

    def get_message(self):
        if not self.messages.empty():
            return self.messages.get()
        else:
            return None
Then, in your main loop, instead of waiting for messages from conn, consume the queue of a Receiver object.
receiver = Receiver(conn)  # conn is the connection object returned by ms.accept()
receiver.start()

while True:
    message = receiver.get_message()
    if message is not None:
        process_message(message)
Ok, so there's a lot going on here, but it's really not that hard.
First, I define a Receiver class that extends threading.Thread and overrides __init__ and run.
The __init__ method sets the attributes that allow fetching the messages, and the run method describes what the thread will do.
This thread runs a perpetual while loop in which it receives messages from the network and puts them into a queue.
By the way, the queue module provides synchronized queues, among them Queue.
It's a good idea to use these instead of a list in a threaded context.
Besides, it's not so trivial to get objects out of a Queue, so I define a get_message method in the Receiver class that gets the job done for me.
Then I instantiate the Receiver class, passing it the conn object returned by ms.accept, and start the thread.
Finally, I run my main while loop, in which I consume the messages from the receiver's queue.
So what does this change?
The receiving calls, here conn.recv, are blocking, which means they halt the execution flow of their thread.
By putting them in their own thread, the main thread is never paused.
Through the Queue object, the main thread can fetch data from the receiving thread without getting blocked.
If there is data, it takes it; if there is not, it just continues.
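If the busy-wait bothers you, queue.Queue.get() can also block with a timeout instead of being polled, which keeps the main loop from spinning. A small variation on the loop above (process_message stands in for whatever handling you do):

import queue

while True:
    try:
        # sleeps inside get() until a message arrives or one second passes
        message = receiver.messages.get(timeout=1.0)
    except queue.Empty:
        continue
    process_message(message)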

requestLoop(loopCondition) doesn't return even after loopCondition is False

I have some issues with the requestLoop method of the Pyro4.Daemon object.
What I want is to remotely call a "stop()" method that makes requestLoop return and shuts down my daemon.
This small example doesn't work:
SERVER
#!/usr/bin/python
# -*- coding: utf-8 -*-
from daemon import Pyro4

class Audit(object):

    def start_audit(self):
        with Pyro4.Daemon() as daemon:
            self_uri = daemon.register(self)
            ns = Pyro4.locateNS()
            ns.register("Audit", self_uri)
            self.running = True
            print("starting")
            daemon.requestLoop(loopCondition=self.still_running)
            print("stopped")
            self.running = None

    def hi(self, string):
        print string

    def stop(self):
        self.running = False

    def still_running(self):
        return self.running

def main():
    # start the auditor
    auditor = Audit()
    auditor.start_audit()

if __name__ == "__main__":
    main()
CLIENT
import Pyro4

def main():
    with Pyro4.Proxy("PYRONAME:Audit") as au:
        au.hi("hello")
        au.hi("another hi")
        au.stop()
What I expect is to see the server print "hello" and "another hi" and then shut down.
But the shutdown doesn't happen; the server stays blocked in the requestLoop method.
I can use my proxy as long as I want.
BUT, if I create another client, then at its first remote call the server shuts down and the client throws an error:
Pyro4.errors.ConnectionClosedError: receiving: not enough data
All my tests suggest that I need to create a second proxy and trigger that exception to get past the requestLoop on my server.
Does anyone have an idea of how to clean up this issue?
If you look at examples/callback/client.py in the sources, you'll see the following comment:
# We need to set either a socket communication timeout,
# or use the select based server. Otherwise the daemon requestLoop
# will block indefinitely and is never able to evaluate the loopCondition.
Pyro4.config.COMMTIMEOUT=0.5
Hence, what you need to do is set COMMTIMEOUT in your server file, and it will work fine according to my tests.
Note: You can also add a print statement to the still_running method to check when it is called. Without the configuration above, you'll see that the method appears to run only when a new request comes in, so the server doesn't shut down until the next request after the one that set running to False is received. For example, if you execute the client program twice, the server will shut down.
