Creating a socket heartbeat with CherryPy and ws4py - python

I have spent about 5 hours searching for how to do this to no avail.
We are using ws4py on top of CherryPy. Currently, when a connection is physically lost (say, you turn off your WiFi), the connection is not terminated until a message is sent from the server, at which point the server detects the dropped line and cleans up the socket.
This is causing us issues, and we need to know sooner if the socket is gone.
The file "websocket.py" in ws4py has a class called "Heartbeat" which looks like exactly what I want, and I believe an instance is created inside "WebSocket" if a "heartbeat_freq" parameter is passed in:
class WebSocket(object):
    """ Represents a websocket endpoint and provides a high level interface to drive the endpoint. """
    def __init__(self, sock, protocols=None, extensions=None, environ=None, heartbeat_freq=None):
Above is the ws4py ctor, but I cannot find where this code is called from. What I do know is that it is tied up in a CherryPy callback system. Here is what I found:
The above ctor is called from "cherrypyserver.py" in the function:
def upgrade(self, protocols=None, extensions=None, version=WS_VERSION, handler_cls=WebSocket, heartbeat_freq=None):
This function seems to be a callback, as it is called from _cprequest.py in the function:

def __call__(self):
    """Run self.callback(**self.kwargs)."""
    return self.callback(**self.kwargs)
Now there is a bit more stuff floating around, but in honesty I'm kinda lost and think I am going about this wrong.
From what I can figure out, I need to set the "heartbeat_freq" parameter of the callback, but am not sure where I would set this parameter. The code below is where I specify the "WebSocket" handler class (websocket2.Handler inherits from "WebSocket") that the callback creates an instance of.
rootmap2 = {
    'wsgi.pipeline': [
        ('validator1', validator),
        ('validator2', validator),
    ],
    'tools.websocket.on': True,
    'tools.websocket.handler_cls': websocket2.Handler,
}
I believe that somewhere in this "rootmap" I have to specify the parameter. Does anybody know how to do this?
To clarify, I want my server to create a heartbeat for each peer. I believe this is done by passing in a "heartbeat_freq" value.
Currently I am just broadcasting a heartbeat to all peers, which I don't like the sound of, personally.

The heartbeat is only activated when the websocket object has its run method executed (usually in a thread, as it's blocking). However, the CherryPy plugin does not use that code path anymore, for various reasons. I'm not yet decided on what to do with the heartbeat because, as it stands, it would have to run in its own thread for each websocket, which I find expensive memory-wise. However, if you do need that feature, create an instance of the Heartbeat class with the websocket instance, and simply hold a reference to each Heartbeat instance.

Related

Erlang like msgbox event loop

I want to implement Erlang-like messaging, unless it already exists. The idea is to create a multiprocess application (I'm using Ray).
I can imagine how to do the send/recv:
from collections import deque
import ray

@ray.remote
class Module:
    def __init__(self):
        self.inbox = {}
    def recv(self, folder, msg):
        # deque has append(), not push()
        self.inbox.setdefault(folder, deque()).append(msg)
    def send(self, mod, folder, msg):
        mod.recv.remote(folder, msg)  # remote actor methods are invoked via .remote()
You call .send(), which remotely calls the target module's .recv() method.
My problem is that I don't know how to implement the internal event loop that REACTS to messages.
It has to be lightweight too, because it runs in every process.
One idea is a while-loop with sleep, but that seems inefficient!
Probably, when a msg arrives it has to trigger some registered FILTER-HOOK if the message matches? So maybe no event loop is needed, just routines triggered by the FILTERs!
What I did for now is trigger a check routine every time I get a message, which goes through rules defined as (key: regex, method-to-call) filters.
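A plain-Python sketch of that last approach (dispatch registered (regex, handler) filters on every receive, so no polling loop is needed) might look like the following; in a Ray actor you would decorate the class with @ray.remote and call recv via .remote(). All names here are illustrative.

```python
import re
from collections import deque

class Module:
    """Mailbox that dispatches on receive instead of polling in a loop."""

    def __init__(self):
        self.inbox = {}
        self.rules = []  # list of (compiled regex, handler) filters

    def register(self, pattern, handler):
        self.rules.append((re.compile(pattern), handler))

    def recv(self, folder, msg):
        # store the message, then immediately run matching filter hooks
        self.inbox.setdefault(folder, deque()).append(msg)
        self._dispatch(folder, msg)

    def _dispatch(self, folder, msg):
        for pattern, handler in self.rules:
            if pattern.search(msg):
                handler(folder, msg)
```

Because dispatch happens inside recv, the process is idle between messages, which keeps the per-process overhead low.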

How to check bus.get_object returned object is valid for dbus in python?

I have a python script which enables Bluetooth discovery & pairing. It uses dbus to configure the Bluetooth parameters. Init code:
self.bus = dbus.SystemBus()
self.bluez_obj = self.bus.get_object('org.bluez', '/org/bluez/hci0')
self.bluetooth_props = dbus.Interface(self.bluez_obj, "org.freedesktop.DBus.Properties")
I have seen that when I set a Bluetooth property with
self.bluetooth_props.Set(self.adapter_iface_name, "Discoverable", dbus.Boolean(value))
the method never returns from the call.
I suspect that the bus.get_object() call did not return a valid object for this application.
How can we check that get_object returned a valid dbus object?
[Edit]
The issue started appearing recently; until now it was working fine. This application is started from init.d during boot, and discovery & pairing are enabled based on input events. get_object & Interface are done only in the application's init. After boot, when the input event is received, the utility is supposed to set the discovery & pairing timeouts:
self.bluetooth_props.Set(self.adapter_iface_name, "DiscoverableTimeout", dbus.UInt32(value))
self.bluetooth_props.Set(self.adapter_iface_name, "PairableTimeout", dbus.UInt32(value))
This call does not return, which leads me to doubt the object created by get_object.
On the other hand, if I add a delay of 1-3 seconds before creating the object with get_object, it works fine. So I guess that when I call get_object without the delay, no valid object is returned, probably because the remote end (the BlueZ stack, I guess) is not ready yet.
Adding the delay resolves the issue, but it may reappear, since this is just a workaround. I have checked the change history; there is no change in the init sequence or in this application that could cause this.
Any suggestions?
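One defensive option, assuming the root cause is that BlueZ does not yet own org.bluez at boot, is to poll until a cheap probe call succeeds rather than sleeping a fixed 1-3 seconds. The helper below is a generic standard-library sketch; as the probe you might, for example, re-run bus.get_object() and an Introspect call on the org.freedesktop.DBus.Introspectable interface on each attempt (the dbus specifics are an assumption here, not shown).

```python
import time

def wait_for_service(probe, timeout=10.0, interval=0.5):
    """Poll `probe` until it succeeds or `timeout` seconds elapse.

    `probe` should raise an exception while the remote service is not
    ready (e.g. a dbus call against org.bluez) and return normally,
    with its result, once it is.
    """
    deadline = time.monotonic() + timeout
    last_error = None
    while time.monotonic() < deadline:
        try:
            return probe()
        except Exception as exc:  # remote end not ready yet; retry
            last_error = exc
            time.sleep(interval)
    raise TimeoutError("service not ready after %ss" % timeout) from last_error
```

Unlike the fixed delay, this returns as soon as the service answers and fails loudly, with the last underlying error attached, if it never does.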

Register Event callbacks with pywin32

I am having difficulty figuring out how to receive events using pywin32. I have created code to do some OPC processing. According to the generated binding in the gen_py folder, I should be able to register event handlers, and it gives me the prototypes I should use, for example:
# Event Handlers
# If you create handlers, they should have the following prototypes:
# def OnAsyncWriteComplete(.......)
So I have written code that implements the handlers I am interested in, but I have not the slightest idea how to attach them to my client, and I cannot seem to find examples that make sense to me. Below I create my client and then add an object that should have events associated with it:
self.server = win32com.client.gencache.EnsureDispatch(driver)
# I can now call the methods on the server object like so....
new_group = self.server.OPCGroups.Add(group)
I want to attach my handler to the new_group object (or perhaps to self.server?), but I cannot seem to understand how I should be doing that.
So my questions are:
How can I attach my handler code for the events? Any examples around I should look at?
Will the handler code have access to attributes stored on the client "self" in this case?
Any help would be greatly appreciated.
After quite a bit of digging, I was able to find a way to do this. What I found was that I could attach my event handler class to the group:
self.server = win32com.client.gencache.EnsureDispatch(driver)
# I can now call the methods on the server object like so....
new_group = self.server.OPCGroups.Add(group)
self._events[group] = win32com.client.WithEvents(
    new_group, GroupEvent)
Once I had that going, it seemed to trigger the events, but the handlers would not run until the end of the script. In order to process the events that were queued up, I call this, which seems to trigger the callbacks to execute:
pythoncom.PumpWaitingMessages()
Don't know if it will help anyone else but it seems to work for what I am doing.
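As a rough stdlib-only illustration of the shape WithEvents expects: a handler class whose On<EventName> methods are looked up by name when the corresponding COM event fires. The dispatch helper and event name below are simulated for clarity, not pywin32 API; in real code, win32com.client.WithEvents does this wiring and pythoncom.PumpWaitingMessages() drives the delivery.

```python
class GroupEvent:
    """Handler whose On* methods are found by event name, WithEvents-style."""

    def __init__(self):
        self.completed = []

    def OnAsyncWriteComplete(self, transaction_id):
        # instance attributes set in __init__ are available here, which is
        # how handler code can reach state stored on the client object
        self.completed.append(transaction_id)

def fire(handler, event_name, *args):
    # Simulates the lookup the COM event machinery performs:
    # find the method named "On<event>" and call it if present.
    method = getattr(handler, "On" + event_name, None)
    if method is not None:
        return method(*args)
```

This also answers the second question above: since the handler is an ordinary instance, anything you store on it (or pass into it) is reachable from the event methods.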
Thanks for this, it was very helpful. To extend the answer, I found I could simply register the driver:
import win32com.client

class MyEvents(object):
    pass

server = win32com.client.gencache.EnsureDispatch(driver)
win32com.client.WithEvents(server, MyEvents)
I discovered this by performing help(win32com.client.WithEvents)

How to avoid class coupling when the specs insist on it

I have two coupled classes DhcpServer and SessionManager. I got the following requirements in my specs that led to that coupling:
DhcpServer must not issue an IP address lease if SessionManager forbids that (e.g. an error occurred while creating a session)
SessionManager must start a session upon creation of a new lease by DhcpServer and destroy a session as soon as that lease expires or gets released explicitly by a client
On the other hand, DhcpServer must destroy the lease if SessionManager stops the corresponding session (e.g. at a sysadmin's request)
At first it was tempting to put all the code into a single class. But the responsibilities were distinct, so I split them into two and created two interfaces:
class ISessionObserver(object):
    def onSessionStart(self, **kwargs): pass
    def onSessionStop(self, **kwargs): pass

class IDhcpObserver(object):
    def onBeforeLeaseCreate(self, **kwargs):
        """
        return False to cancel lease creation
        """
        pass
    def onLeaseCreate(self, **kwargs): pass
    def onLeaseDestroy(self, **kwargs): pass
Then I implemented IDhcpObserver in SessionManager and ISessionObserver in DhcpServer. And that led to coupling: even though the classes do not depend on each other directly, they do depend on the interfaces declared in each other's packages.
Later I want to add another protocol for session initiation while leaving SessionManager's logic intact. I don't want it to implement IAnotherProtocolObserver as well.
Also, a DHCP server as such has nothing to do with my notion of a session. And since there is no DHCP protocol implementation for Twisted (which I'm using), I wanted to release it as a separate project with no dependencies on either SessionManager or its package.
How can I satisfy my spec requirements while keeping the code pieces loosely coupled?
A good way to decouple classes is to use events.
So what you need to do is to "fire" events when something happens. Example: Send an event "session created" when the SessionManager could create a session. Make the DhcpServer listen for that event and prepare a lease when it receives it.
Now all you need is a third class which creates the other two and configures the event listeners.
The beauty of this solution is that it keeps everything simple. When you write unit tests, you will only ever need one of the classes, because all you need to check is whether the correct event has been fired.
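A minimal sketch of that event approach (all names here are illustrative, not from the original code): a tiny event bus that the third "wiring" class can use to connect SessionManager and DhcpServer without either one importing the other.

```python
from collections import defaultdict

class EventBus:
    """Publish/subscribe hub: emitters and listeners never reference each other."""

    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event, handler):
        self._listeners[event].append(handler)

    def fire(self, event, **kwargs):
        # events with no listeners are silently ignored, so a new protocol
        # can fire its own events without SessionManager knowing about it
        for handler in list(self._listeners[event]):
            handler(**kwargs)

# The wiring class would do something like:
#   bus.subscribe("session.stopped", dhcp_server.destroy_lease)
#   bus.subscribe("lease.created", session_manager.start_session)
```

The spec's veto requirement (SessionManager forbidding a lease) fits the same shape if the "before lease create" event collects return values from its handlers, though that variant is not shown here.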

Specifying timeout while fetching/putting data in memcache (django)

I have a django based http server, and I use django.core.cache.backends.memcached.MemcachedCache as the client library to access memcache. I want to know whether we can set a timeout (say 500 ms) so that the call to memcached returns False if it cannot access the cache within 500 ms, and we then make the call to the DB. Is there any such setting to do that?
Haven't tried this before, but you may be able to use threading and set up a timeout for the function call to cache. As an example, ignore the example provided in the main body at this link, but look at Jim Carroll's comment:
http://code.activestate.com/recipes/534115-function-timeout/
Adapted for something you might use:
from threading import Timer
import thread  # Python 2; renamed _thread in Python 3

def timeout():
    thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

timer = Timer(0.5, timeout)
try:
    timer.start()
    cache.get(stuff)
except KeyboardInterrupt:
    print "Use a function to grab it from the database!"
finally:
    timer.cancel()  # don't interrupt later work if the cache call was fast
I don't have time to test it right now, but my concern would be whether Django itself is threaded, and if so, is interrupting the main thread what you really want to do? Either way, it's a potential starting point. I did look for a configuration option that would allow for this and found nothing.
