Erlang-like mailbox event loop in Python

I want to implement Erlang-like messaging, unless something like it already exists.
The idea is to build a multiprocess application (I'm using Ray).
I can imagine how to do the send/recv:
@ray.remote
class Module:
    def __init__(self):
        self.inbox = {}

    def recv(self, folder, msg):
        if folder not in self.inbox:
            self.inbox[folder] = deque()
        self.inbox[folder].append(msg)  # deque has append(), not push()

    def send(self, mod, folder, msg):
        mod.recv.remote(folder, msg)  # Ray actor methods are invoked via .remote()
You call .send(), which remotely calls the target module's .recv() method.
My problem is that I don't know how to build the internal event loop that reacts to messages.
It has to be lightweight too, because it runs in every process.
One idea is a while-loop with a sleep, but that seems inefficient.
Perhaps an arriving message should instead trigger a registered filter hook when it matches? Then no event loop would be needed at all, just routines triggered by filters.

What I did for now is trigger a check routine every time I get a message; it goes through rules defined as (key: regex, method-to-call) filters.
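A minimal plain-Python sketch of that filter-hook idea: recv() itself walks the registered (regex, handler) pairs, so handlers fire as messages arrive and no polling loop is needed. With Ray you would decorate the class with @ray.remote and invoke recv via .remote(); the register method name is a hypothetical choice for this sketch.

```python
import re
from collections import deque

class Module:
    def __init__(self):
        self.inbox = {}            # folder -> deque of messages
        self.filters = []          # list of (compiled regex, handler) pairs

    def register(self, pattern, handler):
        # handler is called as handler(folder, msg) when pattern matches
        self.filters.append((re.compile(pattern), handler))

    def recv(self, folder, msg):
        self.inbox.setdefault(folder, deque()).append(msg)
        for rx, handler in self.filters:
            if rx.search(msg):     # dispatch immediately on match, no event loop
                handler(folder, msg)
```

Usage: m.register(r"^ping", on_ping) and every message starting with "ping" triggers on_ping as soon as it is delivered; unmatched messages just sit in the inbox.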

Related

Redis-py: run_in_thread event handler stops getting called after a few hours

I'm trying to implement a basic pubsub using the redis-py client.
The idea is that the publisher is actually a callback that gets called periodically and publishes some information on channel1 from within the callback function.
The subscriber listens on that channel and does some processing for each message.
The subscriber is a basic bare-bones webserver deployed on k8s, and it should simply display the messages it receives via the event_handler function.
subscriber.py
class Sub(object):
    def __init__(self):
        redis = Redis(host=...,
                      port=...,
                      password=...,
                      db=0)
        ps = redis.pubsub(ignore_subscribe_messages=True)
        ps.subscribe(**{'channel1': Sub.event_handler})
        ps.run_in_thread(sleep_time=0.01, daemon=True)

    @staticmethod
    def event_handler(msg):
        print("Hello from event handler")
        if msg and msg.get('type') == 'message':  # interested only in messages, not subscribe/unsubscribe/pmessage
            # process the message
            ...
publisher.py
redis = Redis(host=...,
              port=...,
              password=...,
              db=0)

def call_back(msg):
    global redis
    redis.publish('channel1', msg)
At the beginning, the messages are published and the subscriber's event handler prints and processes them correctly.
The problem is that after a few hours the subscriber stops showing those messages. I've checked the publisher logs and the messages definitely get sent out, but I can't figure out why event_handler is no longer being called.
The print statement in it stops getting printed, which is why I say the handler stops firing after a few hours.
Initially I suspected the thread must have died, but on exec-ing into the container I can see it listed among the running threads.
I've read through a lot of blogs and documentation but haven't found much help.
All I can deduce is that the event handler stops getting called after some time.
Can anyone help me understand what's going on, and what the best way is to reliably consume pubsub messages in a non-blocking way?
Really appreciate any insights you guys have! :(
Could you post the whole publisher.py, please? It could be the case that call_back(msg) isn't called anymore.
To check whether a client is still subscribed, you can use the command PUBSUB CHANNELS in redis-cli.
Regards, Martin
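One common culprit for this symptom is the pubsub connection dying silently (an idle TCP link cut by a firewall or load balancer) while the worker thread keeps spinning. A hedged sketch of a more defensive setup, assuming a recent redis-py (health_check_interval and run_in_thread's exception_handler parameter are version-dependent, so verify against the release you deploy):

```python
import logging

try:
    import redis            # assumes redis-py is installed
except ImportError:
    redis = None            # keep the sketch importable without it

def start_subscriber(host, port, password, handler):
    # Enable periodic PING health checks so a dead connection is detected
    # instead of the subscriber silently going deaf.
    client = redis.Redis(host=host, port=port, password=password, db=0,
                         health_check_interval=30)
    ps = client.pubsub(ignore_subscribe_messages=True)
    ps.subscribe(**{'channel1': handler})

    def on_error(exc, pubsub, thread):
        # Called by the worker thread on failure; stop it so a supervisor
        # can notice the dead thread and resubscribe.
        logging.exception("pubsub worker died: %s", exc)
        thread.stop()

    thread = ps.run_in_thread(sleep_time=0.01, daemon=True,
                              exception_handler=on_error)
    # Return everything: the caller must hold strong references, since
    # losing them can tear down the subscription.
    return client, ps, thread
```

A supervisor loop can then periodically check thread.is_alive() and call start_subscriber again when it has stopped.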

Signal handler accepts (*args), how do I provide them?

I'm using a library called BACpypes to communicate over the network with a PLC. The short version is that I need to start a BACpypes application in its own thread and then perform reads/writes to the PLC in this separate thread.
For multiple PLCs, there is a processing loop that creates an application (providing the PLC's IP address), performs reads/writes on the PLC using the application, kills the application by calling BACpypes' stop(*args) from the core module, calls join on the thread, and then moves on to the next IP address in the list until we start over again. This works for as many IP addresses (PLCs) as we have, but as soon as we are back at the first IP address (PLC) again, I get the error:
socket.error: [Errno 98] Address already in use
Here is the short code for my thread class, which uses the stop() and run() functions from BACpypes core.
class BACpypeThread(Thread):
    def __init__(self, name):
        Thread.__init__(self)
        self.name = name  # set the instance attribute, not Thread.name

    def run(self):
        run()    # BACpypes core.run()

    def stop(self):
        stop()   # BACpypes core.stop()
It seems like I'm not correctly killing the application. I know stop(*args) is registered as a signal handler, according to the BACpypes docs. Here is a snippet I pulled from this link: http://bacpypes.sourceforge.net/modules/core.html
core.stop(*args)
Parameters: args – optional signal handler arguments
This function is called to stop a BACpypes application. It resets the running boolean value. This function also installed as a signal handler responding to the TERM signal so you can stop a background (deamon) process:
$ kill -TERM 12345
I feel like I need to deliver a kill -TERM signal to make the IP address available again, but I don't know how to do that. Here are my questions:
1) In this example, 12345 is the process ID, I believe. How do I figure out that number for my thread?
2) Once I have the number, how do I actually pass the kill -TERM signal to the stop function? I just don't know how to write this line of code, so if someone could explain it, that would be great.
Thanks for the help!
Before stopping the core, you need to free the socket.
I use:

try:
    self.this_application.mux.directPort.handle_close()
except AttributeError:  # no directPort on this application
    self.this_application.mux.broadcastPort.handle_close()

After that I call stop(), then thread.join().
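On the two numbered questions themselves, a short sketch may help: signals are delivered to a process, not to a thread, so for your own process the pid in kill -TERM 12345 is simply os.getpid(), and os.kill() is the programmatic equivalent of the shell command. (POSIX only; and note that from inside the same process you can just call core.stop() directly, no signal needed.)

```python
import os
import signal
import time

received = []

def on_term(signum, frame):
    # A handler installed for SIGTERM, analogous to BACpypes registering
    # core.stop() for the TERM signal.
    received.append(signum)

signal.signal(signal.SIGTERM, on_term)   # install the handler
os.kill(os.getpid(), signal.SIGTERM)     # same effect as `kill -TERM <pid>`
time.sleep(0.1)                          # give the handler a chance to run
```

After this runs, received holds signal.SIGTERM, showing the handler was invoked without any external kill command.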

Register Event callbacks with pywin32

I am having difficulty figuring out how to receive events using pywin32. I have created code to do some OPC processing. According to the generated binding in the gen_py folder, I should be able to register event handlers, and it gives me the prototypes I should use, for example:
# Event Handlers
# If you create handlers, they should have the following prototypes:
# def OnAsyncWriteComplete(.......)
So I have written code that implements the handlers I am interested in, but I don't have the slightest idea how to get them attached to my client, and I can't seem to find examples that make sense to me. Below I create my client and then add an object that should have events associated with it...
self.server = win32com.client.gencache.EnsureDispatch(driver)
# I can now call the methods on the server object like so....
new_group = self.server.OPCGroups.Add(group)
I want to attach my handler to the new_group object (or perhaps to self.server?), but I can't figure out how I should be doing that.
So my questions are:
How can I attach my handler code for the events? Any examples around I should look at?
Will the handler code have access to attributes stored on the client "self" in this case?
Any help would be greatly appreciated.
After quite a bit of digging, I found a way to do this. What I discovered is that I could attach my event handler class to the group:
self.server = win32com.client.gencache.EnsureDispatch(driver)
# I can now call the methods on the server object like so....
new_group = self.server.OPCGroups.Add(group)
self._events[group] = win32com.client.WithEvents(
new_group, GroupEvent)
Once I had that going, the events seemed to trigger, but they would not run until the end of the script. To process the events that had queued up, I call this, which triggers the callbacks to execute:
pythoncom.PumpWaitingMessages()
Don't know if it will help anyone else but it seems to work for what I am doing.
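Since a single PumpWaitingMessages() call only dispatches what is already queued, the usual pattern is to pump in a loop so WithEvents callbacks fire as events arrive. A hedged sketch (pywin32 is Windows-only, so the import is guarded; should_stop is a hypothetical callable you supply, e.g. one that checks a threading.Event):

```python
import time

try:
    import pythoncom        # part of pywin32, Windows only
except ImportError:
    pythoncom = None        # keeps the sketch importable elsewhere

def pump_events(should_stop, interval=0.05):
    # Poll the COM message queue until told to stop, dispatching any
    # queued COM events to their registered handlers on each pass.
    while not should_stop():
        pythoncom.PumpWaitingMessages()
        time.sleep(interval)  # avoid a busy-wait between polls
```

Regarding the question about attribute access: handlers on a WithEvents class run in your process, so they can reach anything you store on the handler instance; sharing the client's self usually means passing it to the handler object explicitly.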
Thanks for this, it was very helpful. To extend the answer, I found I could simply register the driver:

import win32com.client

class MyEvents(object): pass

server = win32com.client.gencache.EnsureDispatch(driver)
win32com.client.WithEvents(server, MyEvents)

I discovered this by running help(win32com.client.WithEvents).

Creating a CherryPy with ws4py socket heartbeat

I have spent about 5 hours searching for how to do this, to no avail.
We are using ws4py on top of CherryPy. Currently, when a connection is physically lost (say you turn off your WiFi) the connection is not terminated until a message is sent from the server, at which point it detects the dropped line and cleans up the socket.
This is causing us issues, and we need to know sooner when the socket is gone.
The file websocket.py in ws4py has a class called Heartbeat which looks like exactly what I want, and I believe an instance is created inside WebSocket if it has a heartbeat_freq parameter passed in:
class WebSocket(object):
    """Represents a websocket endpoint and provides a high level interface to drive the endpoint."""
    def __init__(self, sock, protocols=None, extensions=None, environ=None, heartbeat_freq=None):
Above is the ws4py constructor, but I cannot find where this code is called from. What I do know is that it is tied up in a CherryPy callback system. Here is what I found:
The above constructor is called from cherrypyserver.py in the function:
def upgrade(self, protocols=None, extensions=None, version=WS_VERSION, handler_cls=WebSocket, heartbeat_freq=None):
This function seems to be a callback, as it is called from _cprequest.py in a function:
def __call__(self):
"""Run self.callback(**self.kwargs)."""
return self.callback(**self.kwargs)
Now there is a bit more stuff floating around, but in honesty I'm a bit lost and think I'm going about this wrong.
From what I can figure out, I need to set the heartbeat_freq parameter of the callback, but I'm not sure where to set it. The code below is where I specify the WebSocket handler class (websocket2.Handler inherits from WebSocket) that the callback creates an instance of:
rootmap2 = {
    'wsgi.pipeline': [
        ('validator1', validator),
        ('validator2', validator),
    ],
    'tools.websocket.on': True,
    'tools.websocket.handler_cls': websocket2.Handler,
}
I believe that somewhere in this rootmap I have to specify the parameter. Does anybody know how to do this?
To clarify, I want my server to create a heartbeat for each peer, which I believe is done by passing in a heartbeat_freq value.
At the moment I am just broadcasting a heartbeat to all peers, which I don't like the sound of, personally.
The heartbeat is only activated when the websocket object has its run method executed (usually in a thread, as it's blocking). However, the CherryPy plugin does not use that code path anymore, for various reasons. I haven't yet decided what to do with the heartbeat because, as it stands, it would have to run in its own thread for each websocket, which I find expensive memory-wise. However, if you do need that feature, create an instance of the Heartbeat class with the websocket instance, and simply hold a reference to each Heartbeat instance.
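A sketch of such a per-connection heartbeat thread, shaped like ws4py's Heartbeat (which takes a websocket and a frequency in the versions I have seen; verify the constructor against your ws4py release). It pings the peer at a fixed interval, so a dead connection is noticed on the next failed write rather than only when application traffic happens to be sent:

```python
import threading

class Heartbeat(threading.Thread):
    def __init__(self, websocket, frequency=2.0):
        threading.Thread.__init__(self)
        self.websocket = websocket
        self.frequency = frequency
        self._stopped = threading.Event()

    def stop(self):
        self._stopped.set()

    def run(self):
        # wait() doubles as the sleep: it returns False every `frequency`
        # seconds until stop() is called.
        while not self._stopped.wait(self.frequency):
            try:
                self.websocket.ping(b"beat")   # write fails fast on a dead socket
            except Exception:
                self.websocket.terminate()     # clean up the broken connection
                break
```

The memory cost the answer mentions is real: one thread per connection. For many peers, a single timer thread iterating over all live websockets is a cheaper variant of the same idea.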

Python XCHAT API question

I am writing a script for XChat, and from reading other scripts I notice most of them use return xchat.EAT_ALL. Here is what the documentation for the XChat Python API says:
Callback return constants (EAT_)
When a callback is supposed to return one of the EAT_ macros, it is able control how xchat will proceed after the callback returns. These are the available constants, and their meanings:
EAT_PLUGIN
Don't let any other plugin receive this event.
EAT_XCHAT
Don't let xchat treat this event as usual.
EAT_ALL
Eat the event completely.
EAT_NONE
Let everything happen as usual.
Returning None is the same as returning EAT_NONE.
I am wondering why to do this. I don't really understand what this is saying, and there isn't much documentation for the XChat Python API. I am curious when to use which of these constants.
Just from what you've pasted in:
Certain events occur in XChat which you can register a function to handle. There can be more than one callback registered for each event, either by plugins or by XChat itself.
So after your function has done whatever it wants to do, it needs to decide whether to allow the other callbacks to be triggered as well. As a simple example, say you're writing a script that filters incoming messages containing certain words. It's triggered whenever a message is received, and goes something like:
if any(word in swearwords for word in message.split()):
    return xchat.EAT_ALL   # the 'message received' event stops here
else:
    return xchat.EAT_NONE  # let it be handled normally
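Wiring that filter into a complete callback might look like the sketch below. The xchat module only exists inside XChat itself, so a stand-in with the documented constant values keeps the sketch runnable elsewhere; swearwords is a hypothetical set for the example, and word[1] holding the message text follows the "Channel Message" print event convention:

```python
try:
    import xchat
except ImportError:
    class xchat:  # stand-in so the sketch runs outside XChat
        EAT_NONE, EAT_XCHAT, EAT_PLUGIN, EAT_ALL = 0, 1, 2, 3
        @staticmethod
        def hook_print(name, callback, userdata=None):
            pass

swearwords = {"bleep"}

def on_channel_message(word, word_eol, userdata):
    message = word[1]                  # word[1] is the message text
    if any(w in swearwords for w in message.split()):
        return xchat.EAT_ALL           # swallow the event completely
    return xchat.EAT_NONE              # let XChat handle it as usual

# Fires for each message printed to a channel
xchat.hook_print("Channel Message", on_channel_message)
```

The intermediate constants fill the gap between these two extremes: EAT_PLUGIN still lets XChat print the message but hides it from other plugins, while EAT_XCHAT does the reverse.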
