I'm reading data from the PLC of a machine using a Python script. The core of the script is a while loop that runs every second and reads whatever data the machine sends. What I want to do is save the data in a database and, at the same time, "publish" some of the data over websockets so that I can show it in real time on a web application. The db part is working.
For my other goal, I've been reading the websockets documentation and I've started a very simple websocket server like this:
import asyncio
import websockets

async def create_socket(data):
    async def hello(websocket, path):
        await websocket.send(data)

    start_server = websockets.serve(hello, "localhost", 5678)
    asyncio.get_event_loop().run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()
The "data" input is JSON and I can read it without any problems from my JS script, but this, of course, blocks the main thread and, since I'm calling the above code from the mentioned while loop, I can't get past the first iteration. What I would like to do is keep going through my while loop and asynchronously update the websocket with the data I read from the PLC at every iteration.
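In other words, the pattern I'm after is something like this rough sketch (here `read_plc` is a stand-in for my actual PLC read, and the `clients` set stands in for whatever the connected websocket clients end up being):

```python
import asyncio
import json

clients = set()  # stand-ins for connected websocket clients

async def broadcast(data):
    # push the latest reading to every connected client
    message = json.dumps(data)
    for client in set(clients):
        await client.send(message)

async def main_loop(read_plc, iterations=3):
    # the while loop keeps running; the broadcast happens
    # inside the same event loop instead of blocking it
    for _ in range(iterations):
        data = read_plc()
        # ... save data to the database here ...
        await broadcast(data)
        await asyncio.sleep(1)
```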
Do you have any tips?
Thanks!
I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That's working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker - say, its current orientation, measured wind speed etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP's fsockopen. Python sends the telemetry data, PHP makes it user friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution, waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run the socket listening asynchronously? Like, for example, pigpio callbacks, which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remotely accessing a status file that my script maintains, but that proved to be extremely involved to set up and prone to errors when the file was being written.
I also tried running a second script. The problem is, then I have no access to the relevant data, or I need to read the aforementioned status file, which leads to the same problems as above.
Relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code in a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes Python bytecode at a time, so you don't really need to add any further locks to the data if you're just reading the values and don't care whether they're also being updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause thread to exit if any error occurs)
        pass

socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
Python does provide the asyncio module - which may provide the callbacks you're looking for (though I don't have any experience with this).
Other options are to run a Flask server in the Python app, which will handle the sockets for you, and you can just code endpoints that return the data. Or think about using an MQTT broker - the current data can be published to it, and other apps can subscribe to updates.
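For completeness, here's a rough sketch of what the asyncio route could look like (untested against the real tracker code; `telemetry` is a placeholder for the actual data, and the port is arbitrary):

```python
import asyncio

telemetry = {"azimuth": 0.0, "wind_speed": 0.0}  # placeholder, updated by the main loop

async def handle_request(reader, writer):
    # asyncio calls this for each incoming connection - no blocking accept() needed
    writer.write(str(telemetry).encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_request, "127.0.0.1", 5050)
    async with server:
        # the tracker control logic would run as another task alongside this
        await server.serve_forever()
```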
I have run the below code in the Python Shell:
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:9092')
future = producer.send('hello-topic', b'Hello, World!')
This works perfectly in that the Kafka consumer picks up the messages.
BUT...
Running it via a script does nothing.
Am I missing something obvious?
The only way to get it working as a script is to add this line...
future.get(timeout=10)
Any help would be appreciated.
Kafka send() details from the link: send() is asynchronous. When called it adds the record to a buffer of pending record sends and immediately returns. This allows the producer to batch together individual records for efficiency.
You can use the flush()/poll() methods to send the message immediately.
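For example, building on the snippet from the question (not run here, since it needs a broker listening on localhost:9092):

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')
future = producer.send('hello-topic', b'Hello, World!')

# send() only buffers the record; in a short-lived script the process can
# exit before the background I/O thread has delivered it. flush() blocks
# until all buffered records have actually been sent to the broker.
producer.flush()
```

In the interactive shell this isn't needed because the process stays alive long enough for the background thread to deliver the message, which is why the same code "works" there.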
I'm struggling to understand a "weird" behavior of my simple script. Basically, it works as expected if time.sleep() is set to 60s, but as soon as I put a value above 90 (90 is apparently the limit in my case), the loop doesn't work properly. I discovered this when I was trying to pause the script for 3 mins.
Here's my script
from gpiozero import CPUTemperature
import time
import paho.mqtt.client as mqtt #import the client1
import psutil
broker_address="192.168.1.17"
client = mqtt.Client("P1") #create new instance
client.connect(broker_address) #connect to broker
#time.sleep(60)
while True:
    cpu = CPUTemperature()
    print(cpu.temperature)
    #a=cpu.temperature
    #print(psutil.cpu_percent())
    #print(psutil.virtual_memory()[2])
    #print(a)
    client.publish("test/message",cpu.temperature)
    #client.publish("test/ram", psutil.virtual_memory()[2])
    #client.publish("test/cpu", psutil.cpu_percent())
    time.sleep(91)
In this case, with 91s it just prints the value of cpu.temperature every 91s, whereas with a value like 60s, besides printing, it also publishes the value via MQTT every cycle.
Am I doing something wrong here? Or for a longer sleep I need to change my code? I'm running this on a RaspberryPi.
Thanks in advance
EDIT:
I solved it by modifying the script, in particular how the MQTT client handles timing.
Here's the new script:
mqttc = mqtt.Client("P1")
#mqttc.on_connect = onConnect
#mqttc.on_disconnect = onDisconnect
mqttc.connect("192.168.1.17", port=1883, keepalive=60)
mqttc.loop_start()

while True:
    cpu = CPUTemperature()
    print(cpu.temperature)
    mqttc.publish("test/message", cpu.temperature)
    time.sleep(300)
The MQTT client uses a network thread to handle a number of different aspects of the connection to the broker.
Firstly, it handles sending ping requests to the broker in order to keep the connection alive. The default keepalive period is 60 seconds. The connection will be dropped by the broker if it does not receive any messages within 1.5 times this value, which just happens to be 90 seconds.
Secondly, the thread handles any incoming messages that the client may have subscribed to.
Thirdly, if you try to publish a message that is bigger than the MTU of the network link, calling mqttc.publish() will only send the first packet and the loop is needed to send the rest of the payload.
There are two ways to run the network tasks.
As you have found, you can start a separate thread with mqttc.loop_start().
The other option is to call mqttc.loop() within your own while loop.
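As a sketch of the second option (assuming paho-mqtt 1.x as in the question; the broker address is taken from the question, and the payload is a placeholder - not run here, since it needs a reachable broker):

```python
import time
import paho.mqtt.client as mqtt

mqttc = mqtt.Client("P1")
mqttc.connect("192.168.1.17", port=1883, keepalive=60)

while True:
    mqttc.publish("test/message", 42)  # placeholder payload
    # instead of a plain time.sleep(300), keep servicing the network while
    # waiting, so the keepalive pings go out and the broker does not drop
    # the connection after 90 seconds of silence
    deadline = time.time() + 300
    while time.time() < deadline:
        mqttc.loop(timeout=1.0)
```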
I have an application that listens for updates from a Firestore collection using google-cloud-firestore. For each update I need to upload some data to an FTP server, which takes time. Receiving a lot of data at the same time introduces a delay that is not acceptable, and I figure the answer is an async callback (i.e. do not wait for my callback to end before continuing) - but is that possible?
Imagine a script like this
from google.cloud.firestore import Client
import time
def callback(col_snapshot, changes, read_time):
    print("Received updates")
    # mock FTP upload
    time.sleep(1)
    print("Finished handling the updates")

Client().collection('news').on_snapshot(callback)

while True:
    pass
How can I modify that code so it doesn't queue each callback?
Update
I've created a feature request at google-cloud-firestore
What you need to do is use one of the approaches mentioned in this SO question.
My suggestion is to use the multiprocessing module in Python 3.
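For instance, a sketch using a pool from concurrent.futures (a thread pool here for brevity; swapping in ProcessPoolExecutor gives the multiprocessing behaviour; `upload_to_ftp` is a hypothetical stand-in for the actual FTP code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

executor = ThreadPoolExecutor(max_workers=4)

def upload_to_ftp(change):
    # hypothetical stand-in for the slow FTP upload
    time.sleep(1)
    return change

def callback(col_snapshot, changes, read_time):
    # hand the slow work off to the pool so the callback returns
    # immediately and the next snapshot is not delayed behind this one
    for change in changes:
        executor.submit(upload_to_ftp, change)
```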
My Python daemon process stops working when its asyncio run_forever loop listens to websocket calls that originate from a separate run_until_complete asyncio coroutine (or thread) but run within the same process (PID). More specifically, I code a localhost server in Python 3.4.3 that updates, via the webbrowser function, an HTML web page in my Firefox browser. I then try to capture button presses elicited in a temporary popup window overlay and relay the associated action strings via websocket calls back to the daemonized server.
Things work fine and calls are processed flawlessly by the websocket server embedded in the run_forever asyncio loop when the websocket client call comes from an independent non-daemonized PID invoked via a command-line call to the same Python script. Things also work fine for the websocket server when an HTML-GUI-based websocket call hits the run_forever asyncio loop. But things go wrong when an initial asyncio coroutine requires additional user input - through a locking HTML window overlay and buttons such as 'accept', 'cancel' or 'quit' - and thereby attempts to capture the button-press-related websocket string signal through a brief separate run_until_complete asyncio coroutine.
In other words, I'm trying to find a way to control flow through my Python script where, intermittently, webbrowser-GUI user input is required to influence program logic. How can that be achieved in a pure Python solution?
OK, I found a solution for the problem described above, with two changes:
1) call_soon_threadsafe: this one finally 'isolates' my second asyncio loop so that the first asyncio loop survives when the following lines get invoked:
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(asyncio.async, websockets.serve(myFunct2, IP, PORT2))
loop.run_forever()
2) I use a separate port number, PORT2, for the HTML popup overlay button websocket callbacks, which corresponds to the second asyncio websocket loop (see above). In sum, regular GUI callbacks use the PORT1 number, while the popup GUI calls use the PORT2 number, for which the second asyncio websocket loop is created temporarily.