So I believe the crux of my issue is that the Python GPIO event detection is threaded, and so is the Socket.IO emit machinery. I'm trying to call an emit when a GPIO event detection fires, and I'm getting errors about not awaiting sio.emit; when I attempt to put an await in front of it, I get a syntax error.
Anyway, I'm simply trying to send messages up to a web client when an interrupt happens on the IO. For example, when an LED blink happens I want to send that up to the client. Socket.IO does the message-to-the-client part, and GPIO interrupts via event detection do the other part (without tying up the app); I just need to get those GPIO event detections through to the Socket.IO emits somehow.
Also I'm using a Sanic web server.
Anyway here's the code:
sio = socketio.AsyncServer(async_mode='sanic')
app = Sanic()
sio.attach(app)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
# This gets called when the GPIO.add_event_detect event fires upon the GPIO pin
# going high (the LED lighting up). This works fine except that sio.emit needs
# to be awaited, but if I put an await in front of it I get a syntax error. If I
# put async in front of the function def instead, I get errors saying I never
# awaited my_callback_one when GPIO.add_event_detect fires it. I'm not sure what
# to do or if this is possible.
def my_callback_one(channel):
    sio.emit('my_response', {'data': 'test'})
    print('========button press LED lighting detected========')
GPIO.add_event_detect(18, GPIO.RISING, callback=my_callback_one)
# Starts the web application
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8090, workers=4)
Under normal circumstances I could do emits from the web server in a threaded fashion like so:
async def background_task():
    count = 0
    while count < 10:
        await sio.sleep(.05)
        print("message sent to client")
        count += 1
        await sio.emit('my_response', {'data': count})

@app.listener('before_server_start')
def before_server_start(sanic, loop):
    sio.start_background_task(background_task)
Which works fine, but as soon as I try to do that sio.emit from the GPIO callback (another thread) I get issues about not awaiting sio.emit. I've tried making my_callback_one async (async def my_callback_one(channel)) and that didn't help.
I know this is a threading issue, but I'm just not sure where to go with it.
Try putting @sio.event before the function, telling Sanic that it is a socket event (with async and await as well). Here's an example:
https://github.com/miguelgrinberg/python-socketio/blob/master/examples/server/sanic/app.py
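For the threading issue itself, a common pattern is to capture the server's event loop once (for example in the before_server_start listener) and then hand coroutines to it from the GPIO thread with asyncio.run_coroutine_threadsafe, instead of awaiting them. A minimal sketch of the idea; emit here is a hypothetical stand-in for sio.emit, and make_callback is a helper I'm introducing for illustration:

```python
import asyncio

# Hypothetical stand-in for sio.emit; any coroutine is handled the same way.
async def emit(event, data):
    return (event, data)

def make_callback(loop):
    # GPIO.add_event_detect fires its callbacks in a plain (non-asyncio)
    # thread, so instead of awaiting we submit the coroutine to the running
    # loop from that thread. run_coroutine_threadsafe returns a
    # concurrent.futures.Future you can ignore or inspect.
    def my_callback_one(channel):
        return asyncio.run_coroutine_threadsafe(
            emit('my_response', {'data': 'test'}), loop)
    return my_callback_one
```

With Sanic you would grab the loop argument of the before_server_start listener and register make_callback(loop) with GPIO.add_event_detect in place of the plain callback.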
Related
I am writing a pyModbus server with asyncio, based on this example.
Alongside the server I've got a serial device which I'm communicating with and a server updating task.
One task should check the status of the serial device every 500ms.
The server updating task should check if there are any changes in the status of the serial device and update the info on the server. Moreover, if there is a request waiting on the server it should call another task which will send necessary info to the serial device.
My three questions are:
How should I stop the server politely? For now the app is running only in console so it is stopped by ctrl+c - how can I stop the server without causing an avalanche of errors?
How can I implement tasks that execute cyclically (let's say I want to refresh the server data every 500 ms)? I've found the aiocron module, but as far as I can tell its functionality is a bit limited, as it is intended just for calling functions at intervals.
How can I politely cancel all the tasks before stopping the server (the infinitely, cyclically running ones) when closing the app?
Thanks!
EDIT:
Speaking of running cyclical tasks and cancelling them: is this a proper way to do that? It doesn't raise any errors, but does it clean everything up correctly? (I created this sketch by compiling a dozen questions on Stack Overflow; I am not sure if it makes sense.)
import asyncio

async def periodic():
    try:
        while True:
            print('periodic')
            await asyncio.sleep(1)
    except asyncio.CancelledError as ex:
        print('task1', type(ex))
        raise

async def periodic2():
    try:
        while True:
            print('periodic2')
            await asyncio.sleep(0.5)
    except asyncio.CancelledError as ex:
        print('task2', type(ex))
        raise

async def main():
    tasks = []
    task = asyncio.create_task(periodic())
    tasks.append(task)
    task2 = asyncio.create_task(periodic2())
    tasks.append(task2)
    for task in tasks:
        await task

if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        pass
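The sketch above runs, but it awaits the tasks forever and relies on Ctrl+C. For the "politely cancel all the tasks" part, a minimal pattern is to keep the task handles, call cancel() on them at shutdown, and then gather them with return_exceptions=True so the CancelledErrors don't escape. A standalone variant of the same structure (toy intervals and a results list replace the 500 ms jobs and prints, so it runs without hardware):

```python
import asyncio

async def periodic(name, interval, results):
    # Runs until cancelled; CancelledError is re-raised after cleanup,
    # which is the recommended way to handle it.
    try:
        while True:
            results.append(name)
            await asyncio.sleep(interval)
    except asyncio.CancelledError:
        results.append(name + ':cancelled')
        raise

async def main(run_for=0.25):
    results = []
    tasks = [
        asyncio.create_task(periodic('task1', 0.1, results)),
        asyncio.create_task(periodic('task2', 0.05, results)),
    ]
    await asyncio.sleep(run_for)   # stand-in for "serve until shutdown"
    for task in tasks:
        task.cancel()              # request polite cancellation
    # return_exceptions=True collects the CancelledErrors instead of
    # re-raising them here, so shutdown finishes cleanly.
    await asyncio.gather(*tasks, return_exceptions=True)
    return results
```

The same cancel/gather pair can be attached to a signal handler (loop.add_signal_handler) so that Ctrl+C triggers an orderly shutdown instead of an avalanche of errors.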
I'm writing an MQTT client which simply connects to the broker, publishes a message, and then disconnects. Here's the code:
def on_connect_v5(client, userdata, flags, rc, properties):
    print('connected')
    client.publish(topic, payload, 0)

def on_publish(client, userdata, mid):
    print(f'mid: {mid}')
    client.disconnect()

client = paho.Client(protocol=paho.MQTTv5)
client.on_connect = on_connect_v5
client.on_publish = on_publish
client.connect(host, port, 60)
client.loop_start()
# client.loop_forever()
The question is: when I use loop_start(), it seems the client doesn't connect successfully, but loop_forever() works. Have I done something wrong with loop_start(), and what's the proper way to use it?
BTW: I have tried using the paho.mqtt.publish module and always get a socket timeout. I'd appreciate it if someone could explain that as well.
The difference is that loop_forever blocks the program. loop_start only starts a daemon thread and doesn't block, so your program continues. In the code you show, this means the program exits.
You can read more here: https://github.com/eclipse/paho.mqtt.python#network-loop
Calling loop_start() once, before or after connect*(), runs a thread in the background to call loop() automatically. This frees up the main thread for other work that may be blocking.
loop_forever(). This is a blocking form of the network loop and will not return until the client calls disconnect(). It automatically handles reconnecting.
Your main thread doesn't wait for loop_start(), because it runs in a daemon thread. Daemon threads don't keep the program alive: when your main thread finishes its job, it exits, and that also kills your loop_start() thread. If your main thread has an infinite loop or longer-running work, loop_start() works perfectly.
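So with loop_start() the fix is to keep the main thread alive until the publish has actually gone out. One common pattern is to wait on a threading.Event that the callback sets; with paho you would call the event's set() inside on_publish and then call loop_stop(). Sketched here with a plain worker thread standing in for the paho network loop, so it runs without a broker:

```python
import threading

done = threading.Event()

def worker(results):
    # Stand-in for paho's daemon network loop completing a publish; with
    # paho you would call done.set() inside the on_publish callback.
    results.append('published')
    done.set()

def run():
    results = []
    t = threading.Thread(target=worker, args=(results,), daemon=True)
    t.start()
    # Without this wait the main thread can exit immediately, and since the
    # worker is a daemon thread it would be killed mid-publish: exactly the
    # loop_start() symptom described above. wait() returns True if the
    # event was set before the timeout.
    finished = done.wait(timeout=5)
    t.join(timeout=5)
    return results, finished
```

The blocking loop_forever() avoids the problem by construction, which is why it "just works" for a one-shot publish.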
I'm writing a script for a Raspberry Pi in Python whose purpose is to listen to a server/message broker for commands and execute said commands with certain hardware. Sometimes, those commands must last for a specified duration (i.e. I need something to turn on, stay on for t seconds, then turn off) and this is accomplished by having the code sleep for said duration between on and off commands (this happens inside a function call -- hardware1.on(dur = t)). I would like to be able to interrupt that sequence with another command (such as turning the hardware off before t seconds is up). I've tried to accomplish this via multiprocessing, but cannot get the behavior I'm looking for.
This hardware (a stalk of differently colored lights) is controlled via a class, LiteStalk. This class is made up of Lite objects (each light in the stalk), which have their own class too. Both classes inherit from multiprocessing.Process. In my main code, which creates a specific LiteStalk and then listens to a message broker (MQTT-based) for commands, I evaluate the commands published to the broker (this happens in the on_message callback, which runs when a message is published to the broker).
import time
import LiteCntrlModule as LiteStalkMod
import multiprocessing
import paho.mqtt.client as mqtt
import RPi.GPIO as gpio   # needed below; missing from the original listing

print('Starting...\n')

# Set gpio designation mode to BCM
gpio.setmode(gpio.BCM)

# Initialize light stalk
stalkdict = {'red':1, 'yel':2, 'grn':3, 'bzr':4}
stalk = LiteStalkMod.LiteStalk(stalkdict)

msgRec = ""

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    if(rc == 0):
        print('Code "0" indicates successful connection. Waiting for messages...')
    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe("asset/andon1/state")

# The callback for when a PUBLISH message is received from the server.
def on_message(client, userdata, msg):
    print(msg.topic+" "+str(msg.payload))
    msgRec = msg.payload
    eval(msg.payload)
    if msg.payload == "stalk.off()":
        print("If this executes while another command is running, it works!")

client = mqtt.Client(client_id="")
client.username_pw_set("mytopic", password="mypassword")
client.on_connect = on_connect
client.on_message = on_message
client.connect("mymessagebrokeraddress", 1883, 60)
client.subscribe("mytopic")

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
try:
    client.loop_start() # start listening in a thread and proceed
except KeyboardInterrupt: # so that aborting with Ctrl+C works cleanly
    stalk.off()
finally:
    stalk.shutDown()
LiteCntrlModule (the Lite and LiteStalk classes) follows:
import time
import multiprocessing
from relay_lib_seeed import *

class Lite(multiprocessing.Process):
    # A Lite object has an associated relay and functions
    #   Ex: red
    # A lite can be controlled
    #   Ex: red.blink()

    def __init__(self, relayIn):
        # Ex: red = Lite.Lite(1)
        multiprocessing.Process.__init__(self) # allows you to create multiple objects that can be run as threads
        self.daemon = True # creates a daemon thread that will exit when the main code terminates
        self.start() # allows multiproc. to begin
        self.relay = relayIn

    def shutDown(self):
        # terminates the threaded object
        relay_off(self.relay)
        self.join()

    def off(self, dur=0):
        # turns light off
        pass

    def on(self, dur=0):
        # turns light on, optional duration to stay on for
        pass

    def blink(self, dur=0, timeOn=.5, timeOff=.5):
        # blinks light
        pass

class LiteStalk(multiprocessing.Process):
    # A LiteStalk object can have any number of "lite" objects in it. Ex:
    #   Object: stalk1
    # A lite object in stalk1 represents one segment/color of the light stalk
    #   stalk1.red
    # Any lite can be turned on/off in various patterns for amounts of time, etc.
    #   stalk1.red.blink()
    # An entire stalk can be controlled all at once
    #   stalk1.cycle()

    liteList = {}

    def __init__(self, liteListIn):
        # liteListIn = {'clr1':relay1, 'clr2':relay2, 'clr3':relay3, ...}
        self.liteList = liteListIn
        multiprocessing.Process.__init__(self) # allows you to create multiple objects that can be run as threads
        self.daemon = True # creates a daemon thread that will exit when the main code terminates
        self.start() # allows multiproc. to begin
        for lite in self.liteList: # for each lite color string in the lites dict
            setattr(self, lite, Lite(self.liteList[lite])) # creates a lite obj attr in the LiteStalk obj
        print(self.liteList)

    def shutDown(self):
        # each light is turned off and that gpio pin is cleaned-up
        relay_all_off()
        self.join() # joins thread

    def off(self, dur=0):
        # turns all hardware off
        pass

    def on(self):
        # turns all hardware on, optional duration to stay on for
        pass

    def blink(self, timeOn, timeOff):
        # blinks all hardware
        pass

    def cntDn(self, dur=20, yelDur=2, redDur=10): # in min
        # enters a count down sequence
        pass
The command always runs to completion before whatever other commands were published to the broker get executed, i.e. the stalk stays on for the commanded duration and cannot be commanded to turn off (or anything else) before the duration is up. I think this may be because I am not putting all the functionality of my multiprocessing-able objects in a run() function, but I've messed around with that with no luck.
The Paho MQTT client is single threaded (the thread you start with the client.loop_start() function), so it can only call on_message() for one message at a time.
This means it will block in the call to eval() until whatever was passed to it has finished, even if that code is creating new threads to do things.
I was going to suggest replacing sleep with waiting with a timeout on a threading.Event or the equivalent and then checking when the sleep ends if it was due to the event being set or the timeout. If the event was set, stop.
But it seems there are other issues than just an interrupt-able sleep.
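For reference, the interrupt-able sleep itself is straightforward: Event.wait(timeout) returns True if the event was set (interrupted) and False if the timeout elapsed (the command ran to completion). A sketch with hypothetical names (interruptible_on, actions) and the hardware reduced to list appends:

```python
import threading

def interruptible_on(dur, cancel_event, actions):
    # Turn "on", then wait up to dur seconds, but wake immediately if
    # cancel_event is set from another thread; then turn "off" either way.
    actions.append('on')
    interrupted = cancel_event.wait(timeout=dur)
    actions.append('off')
    return interrupted
```

Another thread (for example the MQTT on_message callback) just calls cancel_event.set() to cut the duration short.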
I have an app, similar to a chat room, written in Python, that intends to do the following things:
A prompt for user to input websocket server address.
Then create a websocket client that connects to server and send/receive messages. Disable the ability to create a websocket client.
After receiving "close" from server (NOT a close frame), client should drop connecting and re-enable the app to create a client. Go back to 1.
If the user exits the app, it exits the websocket client if one is running.
My approach is to use a main thread to deal with user input. When the user hits enter, a thread is created for a WebSocketClient using Autobahn's twisted module, and a Queue is passed to it. I check whether the reactor is running and start it if it's not.
I override the onMessage method to put a closing flag into the Queue on receiving "close". The main thread busy-checks the Queue until it receives the flag, then goes back to step 1. The code looks like the following.
Main thread.
def main_thread():
    while True:
        text = raw_input("Input server url or exit")
        if text == "exit":
            if myreactor:
                myreactor.stop()
            break
        msgq = Queue.Queue()
        threading.Thread(target=wsthread, args=(text, msgq)).start()
        is_close = False
        while True:
            if msgq.empty() is False:
                msg = msgq.get()
                if msg == "close":
                    is_close = True
                else:
                    print msg
            if is_close:
                break
        print 'Websocket client closed!'
Factory and Protocol.
class MyProtocol(WebSocketClientProtocol):
    def onMessage(self, payload, isBinary):
        msg = payload.decode('utf-8')
        self.factory.queue.put(msg)
        if msg == 'close':
            self.dropConnection(abort=True)

class WebSocketClientFactoryWithQ(WebSocketClientFactory):
    def __init__(self, *args, **kwargs):
        self.queue = kwargs.pop('queue', None)
        WebSocketClientFactory.__init__(self, *args, **kwargs)
Client thread.
def wsthread(url, q):
    factory = WebSocketClientFactoryWithQ(url=url, queue=q)
    factory.protocol = MyProtocol
    connectWS(factory)
    if myreactor is None:
        myreactor = reactor
        reactor.run()
    print 'Done'
Now I have a problem: it seems that my client thread never stops. Even after I receive "close", it is apparently still running, and every time I try to create a new client, it creates a new thread. I understand the first thread won't stop, since reactor.run() runs forever, but from the 2nd thread on it should be non-blocking, since I'm not starting the reactor anymore. How can I change that?
EDIT:
I ended up solving it by:
Adding stopFactory() after the disconnect.
Making the protocol functions use reactor.callFromThread().
Starting the reactor in the first thread, putting clients in other threads, and using reactor.callInThread() to create them.
Your main_thread creates new threads running wsthread. wsthread uses Twisted APIs. The first wsthread becomes the reactor thread. All subsequent threads are different and it is undefined what happens if you use a Twisted API from them.
You should almost certainly remove the use of threads from your application. For dealing with console input in a Twisted-based application, take a look at twisted.conch.stdio (not the best documented part of Twisted, alas, but just what you want).
My query is this: I have made a Raspberry Pi into a server, and I need to control the Pi's GPIO from another device using sockets. My server code works so that when I send the string 'on', 'off', or 'blink' from the client, the LED turns on, turns off, or blinks. LED on/off/blink all succeed on their own, but I'm facing an issue during blinking: if the client transmits 'on' or 'off' while the LED is blinking, the on/off operation fails. How can I fix that?
Any help is appreciated.
# server code
import socket
import time
import sys
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
GPIO.setup(12,GPIO.OUT)
GPIO.output(12,False)

ms=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
ms.bind(('',1234))
ms.listen(1)
conn,addr=ms.accept()
print('Connection Established from ',addr)

while True:
    data=conn.recv(1000)
    print(addr,': ',data)
    if not data:
        break
    elif data==b'raspberry\n':
        print('Hello Pi')
    elif data==b'on\n':
        print('led is turned ON')
        GPIO.output(12,True)
    elif data==b'off\n':
        print('led is turned OFF')
        GPIO.output(12,False)
    elif data==b'blink\n':
        print('led blinking')
        while True: #.... Here Is my query....
            GPIO.output(12,True)
            time.sleep(0.5)
            GPIO.output(12,False)
            time.sleep(0.5)
            if conn.recv(1000)==b'on\n' or conn.recv(1000)==b'off\n':
                break
    elif data==b'exit\n':
        print('Goodbye..')
        time.sleep(1.5)
        break

conn.close()
ms.close()
sys.exit()
The thing is that you consume the received data, but you do not store it in data.
You need to store the received data in data:
while True:
    GPIO.output(12,True)
    time.sleep(0.5)
    GPIO.output(12,False)
    time.sleep(0.5)
    data = conn.recv(1000)
    if data==b'on\n' or data==b'off\n':
        break
But then, data will be overwritten at the next iteration, by data = conn.recv(1000).
Therefore, you need to respond to the on/off instruction at this very point.
while True:
    GPIO.output(12,True)
    time.sleep(0.5)
    GPIO.output(12,False)
    time.sleep(0.5)
    data = conn.recv(1000)
    if data==b'on\n':
        #on
        break
    elif data==b'off\n':
        #off
        break
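Note that even with this fix, the blink loop still blocks on conn.recv for as long as the client stays silent, freezing the blinking. One way to poll instead of block is to make the socket temporarily non-blocking for the check (select.select with a timeout would work just as well). A sketch, where poll_command is a hypothetical helper:

```python
import socket

def poll_command(conn):
    # Check for a pending command without blocking the blink loop.
    # settimeout(0) puts the socket in non-blocking mode, so recv raises
    # BlockingIOError when nothing is queued instead of waiting.
    conn.settimeout(0)
    try:
        return conn.recv(1000)
    except BlockingIOError:
        return None
    finally:
        conn.settimeout(None)   # restore blocking mode for the outer loop
```

Calling poll_command(conn) once per blink cycle lets an 'on\n' or 'off\n' interrupt the blinking within half a period.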
As you asked for it, here is a simple way to handle a network application.
First, have a thread that receives the messages.
import threading
import queue

class Receiver(threading.Thread):
    def __init__(self, connection):
        super().__init__()
        self.connection = connection
        self.messages = queue.Queue()

    def run(self):
        while True:
            message = self.connection.recv(1000)
            self.messages.put(message)

    def get_message(self):
        if not self.messages.empty():
            return self.messages.get()
        else:
            return None
Then, in your main loop, instead of waiting for message from conn, consume the queue of a Receiver object.
receiver = Receiver(conn) # conn is the connection object returned by ms.accept()
receiver.start()

while True:
    message = receiver.get_message()
    if message is not None:
        process_message(message)
Ok so there's a lot going on here, but it's really not that hard.
First, I define a Receiver class that extends threading.Thread and overrides __init__ and run.
The __init__ method is used to set the attributes that will allow fetching the messages, and the run method describes what the thread will do.
This thread will run a perpetual while loop, in which it will receive messages from the network and put them into a queue.
By the way, the queue module provides synchronized queues, among which is Queue.
It's a good idea to use these instead of a list in a threaded context.
Besides, it's not so trivial to get objects out of a Queue object, so I define a get_message method in the Receiver class that gets the job done for me.
Then I instantiate the Receiver class, by passing it the conn object received from ms.accept, and I start my thread.
Finally, I run my main while loop, in which I consume the messages from the receiver's queue.
So what does it change?
The receiving methods, here conn.recv, are blocking, which means they halt the execution flow of their thread.
By putting them in their own thread, the main thread will not be paused.
Through the Queue object, the main thread can fetch data from the receiving thread, but without getting blocked.
If there is data, then it takes it. If there is not, it just continues.
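Tying this back to the blink problem: the blink loop can then poll the receiver between toggles instead of blocking on the socket. A sketch with hypothetical names (blink_loop, set_led), where the GPIO call is abstracted into a function so the logic runs off the Pi:

```python
import time

def blink_loop(receiver, set_led, cycles=50, half_period=0.01):
    # Blink until an 'on'/'off' command shows up in the receiver's queue.
    # receiver is anything with a get_message() that returns None when the
    # queue is empty (like the Receiver class above); set_led drives the pin
    # (e.g. lambda state: GPIO.output(12, state) on the Pi).
    for _ in range(cycles):
        set_led(True)
        time.sleep(half_period)
        message = receiver.get_message()   # returns immediately, never blocks
        if message in (b'on\n', b'off\n'):
            return message                 # let the caller act on the command
        set_led(False)
        time.sleep(half_period)
    return None
```

Because get_message never blocks, a command sent mid-blink interrupts the pattern within half a blink period instead of being lost.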