I have two coupled classes, DhcpServer and SessionManager. The following requirements in my specs led to that coupling:
DhcpServer must not issue an IP address lease if SessionManager forbids that (e.g. an error occurred while creating a session)
SessionManager must start a session upon creation of a new lease by DhcpServer and destroy a session as soon as that lease expires or gets released explicitly by a client
On the other hand, DhcpServer must destroy the lease if SessionManager stops the corresponding session (e.g. at a sysadmin's request)
At first it was tempting to put all the code into a single class. But the responsibilities were distinct, so I split them into two and created two interfaces:
class ISessionObserver(object):
    def onSessionStart(self, **kwargs): pass
    def onSessionStop(self, **kwargs): pass

class IDhcpObserver(object):
    def onBeforeLeaseCreate(self, **kwargs):
        """
        Return False to cancel lease creation.
        """
        pass

    def onLeaseCreate(self, **kwargs): pass
    def onLeaseDestroy(self, **kwargs): pass
Then I implemented IDhcpObserver in SessionManager and ISessionObserver in DhcpServer, and that led to coupling: even though the classes do not depend on each other directly, they do depend on the interfaces declared in each other's packages.
Later I want to add another protocol for session initiation while leaving SessionManager's logic intact. I don't want SessionManager to implement IAnotherProtocolObserver as well.
Also, a DHCP server as such has nothing to do with my notion of a session. And since there's no DHCP protocol implementation for Twisted (which I'm using), I want to release it as a separate project with no dependency on either SessionManager or its package.
How can I satisfy my spec requirements while keeping the code pieces loosely coupled?
A good way to decouple classes is to use events.
So what you need to do is "fire" events when something happens. Example: send a "session created" event when SessionManager has successfully created a session. Make DhcpServer listen for that event and prepare a lease when it receives it.
Now all you need is a third class which creates the other two and configures the event listeners.
The beauty of this solution is that it keeps everything simple. When you write unit tests, you will only ever need one of the classes, because all you need to check is whether the correct event has been fired.
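As a rough sketch of what that wiring could look like (the EventBus class, the event names and the methods start_session, stop_session and destroy_lease are placeholders of mine for illustration, not taken from your code; I'm also assuming both constructors accept the bus):

class EventBus(object):
    """Minimal publish/subscribe helper."""
    def __init__(self):
        self._listeners = {}

    def subscribe(self, event, callback):
        self._listeners.setdefault(event, []).append(callback)

    def publish(self, event, **kwargs):
        for callback in self._listeners.get(event, []):
            callback(**kwargs)

# The "third class": creates the other two and configures the listeners.
def build_system():
    bus = EventBus()
    dhcp_server = DhcpServer(bus)
    session_manager = SessionManager(bus)
    # SessionManager reacts to lease events without knowing DhcpServer.
    bus.subscribe('lease.created', session_manager.start_session)
    bus.subscribe('lease.destroyed', session_manager.stop_session)
    # DhcpServer reacts to session events without knowing SessionManager.
    bus.subscribe('session.stopped', dhcp_server.destroy_lease)
    return dhcp_server, session_manager

Each class only publishes to and subscribes on the bus; neither imports the other's package.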
I am new to socket programming and software architecture.
My system must be: a GUI on my laptop, written in Python. There are many embedded systems with the same sensors (GPS, temperature, pressure, ...). Each time you select an embedded system, my program needs to establish a connection with it, and I need to show its GPS position and the real-time feed of its sensors in the GUI (for now the GUI is not the problem; I can do it with Kivy or Tkinter).
This is how it must function:
In the GUI, there is a field to enter the ID of an embedded system and a button to try to connect to it.
When the button is clicked, the program establishes the connection and shows GPS, temperature and pressure in real time, continuously, until the connection is lost.
I was thinking of doing it with this architecture:
A thread to deal with the GUI.
Each time a button is clicked and an embedded system is found, an object of a class I created is instantiated.
The class has as attributes:
a GPS list (to store the GPS feed)
a temperature list (to store the temperature feed)
a pressure list
a thread_socket (the socket is created in a thread to act as a client to the embedded system, so each time an object of the class is instantiated, a separate socket is created)
The class has as methods:
Get_Gps(): each time this method is called, the GPS list attribute is updated.
Get_Temperature() / Get_Pressure(): likewise for temperature and pressure.
Stop(): when this method is called, the embedded system needs to shut down.
In the socket thread, I have methods such as send_message() and receive_message() to send, over TCP/IP, the requests for GPS and sensor data or for stopping the system.
On each embedded system I will put a Python server that is set up every time the system starts.
This way the ID of the system is the IP of the server, and my laptop would be a client, looking up the IP address when I select a system. A rough sketch of the class I have in mind is below.
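Here it is; untested, and the name EmbeddedSystemClient, the port number and the simple line protocol are placeholders I made up, not an existing API:

import socket

class EmbeddedSystemClient(object):
    """One instance per connected embedded system."""

    def __init__(self, ip, port=5000):
        self.gps = []           # GPS feed
        self.temperature = []   # temperature feed
        self.pressure = []      # pressure feed
        # TCP client socket to the embedded system's server.
        self._sock = socket.create_connection((ip, port), timeout=5)

    def _request(self, command):
        # Placeholder line protocol: send a command, read one reply line.
        self._sock.sendall(command.encode() + b"\n")
        return self._sock.recv(1024).decode().strip()

    def get_gps(self):
        self.gps.append(self._request("GPS"))

    def get_temperature(self):
        self.temperature.append(self._request("TEMP"))

    def get_pressure(self):
        self.pressure.append(self._request("PRESSURE"))

    def stop(self):
        # Ask the embedded system to shut down, then close the socket.
        try:
            self._request("STOP")
        finally:
            self._sock.close()

A worker thread, separate from the GUI thread, would then call get_gps()/get_temperature()/get_pressure() in a loop until the connection drops or stop() is called.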
My questions are:
Does this architecture seem alright to you?
Is it correct to receive a real-time feed into a list, for example for the GPS?
Each time I find a system I instantiate a new object to keep things clean; is this a good way to do it?
Do you see any issues or improvements?
Thank you in advance,
I think your approach in general is fine.
However, you should keep a few things in mind:
When designing your software, you should first identify the different tasks involved and define separate functional units for each task. This is the concept of separation of concerns.
I also suggest reading a bit about the Model-View-Controller (MVC) pattern: in your case, the model would be your class containing the data structure for the measurements and the business logic (e.g. polling data from a source, say every second, until the connection is stopped). The view and the controller might both be located in the GUI (which is absolutely fine).
The GUI does not necessarily need an explicit thread of its own; many frameworks instead work with an event-based concept that lets you define the application's behavior for a given user interaction.
Why do you actually need lists for the measurements? Is there a requirement to keep the history of measurements over a certain period of time? Is this a list that will keep growing and growing, or rather a rolling list (e.g. for showing the last n seconds/minutes of measurements in the GUI)? There also seems to be a bit of a contradiction with starting a new class instance for every new connection, because you would obviously lose the contents when you stop the connection and terminate the instance.
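If you only need the most recent samples for display, a bounded deque is a simple option. A minimal sketch of such a model class (the name Measurements and the maxlen of 100 are arbitrary choices of mine):

from collections import deque

class Measurements(object):
    """Model: keeps only the most recent samples of each feed."""
    def __init__(self, maxlen=100):
        self.gps = deque(maxlen=maxlen)
        self.temperature = deque(maxlen=maxlen)
        self.pressure = deque(maxlen=maxlen)

    def add_sample(self, gps, temperature, pressure):
        # The oldest samples are dropped automatically once maxlen is reached.
        self.gps.append(gps)
        self.temperature.append(temperature)
        self.pressure.append(pressure)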
Hope this gives you some ideas of how to proceed from there.
I have a Python script which enables Bluetooth discovery & pairing. It uses dbus to configure the Bluetooth parameters. Init code:
self.bus = dbus.SystemBus()
self.bluez_obj = self.bus.get_object('org.bluez', '/org/bluez/hci0')
self.bluetooth_props = dbus.Interface(self.bluez_obj, "org.freedesktop.DBus.Properties")
I have seen that when I want to set a Bluetooth property with
self.bluetooth_props.Set(self.adapter_iface_name, "Discoverable", dbus.Boolean(value))
the method does not return from the call.
I suspect that the bus.get_object() call did not return a valid object for this application.
How can I check that get_object returned a valid dbus object?
[Edit]
The issue started appearing recently; until now it was working fine. This application is started from init.d during boot, and discovery & pairing are enabled based on input events. get_object & Interface are called in the application init only. After boot, when the input event is received, the utility is supposed to set the timeouts for discovery & pairing:
self.bluetooth_props.Set(self.adapter_iface_name, "DiscoverableTimeout", dbus.UInt32(value))
self.bluetooth_props.Set(self.adapter_iface_name, "PairableTimeout", dbus.UInt32(value))
These calls do not return, which leads me to doubt the object created by get_object.
On the other hand, if I add a delay of 1-3 seconds before creating the object with get_object, then it just works fine. So I guess that when I call get_object without the delay, no valid object is returned, perhaps because the remote end (the BlueZ stack, I guess) is not ready yet.
Adding the delay resolves the issue, but it may appear again since it's just a workaround. I have checked the change history; there is no change in the init sequence or in this application that could be causing this.
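For reference, instead of a fixed sleep I was considering waiting until org.bluez actually appears on the system bus before creating the object, something like this (an untested sketch using python-dbus's name_has_owner):

import time
import dbus

def wait_for_bluez(bus, timeout=10.0):
    # Poll until the org.bluez name has an owner on the system bus.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if bus.name_has_owner('org.bluez'):
            return True
        time.sleep(0.2)
    return False

bus = dbus.SystemBus()
if not wait_for_bluez(bus):
    raise RuntimeError("org.bluez did not appear on the system bus")
bluez_obj = bus.get_object('org.bluez', '/org/bluez/hci0')

Even then, the /org/bluez/hci0 adapter object may take a moment longer to show up, so retrying the first property call itself might still be needed.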
Any suggestions?
Most Python Windows service examples based on win32serviceutil.ServiceFramework use win32event for synchronization.
For example:
http://tools.cherrypy.org/wiki/WindowsService (the example for cherrypy 3.0)
(sorry, I don't have the reputation to post more links, but many similar examples can be googled)
Can somebody clearly explain why the win32 events are necessary (self.stop_event in the above example)?
I guess it's necessary to use the win32 event because different threads call SvcStop and SvcDoRun? But I'm getting confused; there are so many other things going on: the split between python.exe and PythonService.exe, system vs. local threads (?), the Python GIL...
From the top of PythonService.cpp:
PURPOSE: An executable that hosts Python services.
This source file is used to compile 2 discrete targets:
* servicemanager.pyd - A Python extension that contains
all the functionality.
* PythonService.exe - This simply loads servicemanager.pyd, and
calls a public function. Note that PythonService.exe may one
day die - it is now possible for python.exe to directly host
services.
What exactly do you mean by system threads vs local threads? You mean threads created directly from C outside the GIL?
PythonService.cpp just relates the names to callable Python objects and a bunch of properties, like the accepted controls.
For example, the accepted controls from the ServiceFramework:
def GetAcceptedControls(self):
    # Setup the service controls we accept based on our attributes. Note
    # that if you need to handle controls via SvcOther[Ex](), you must
    # override this.
    accepted = 0
    if hasattr(self, "SvcStop"): accepted = accepted | win32service.SERVICE_ACCEPT_STOP
    if hasattr(self, "SvcPause") and hasattr(self, "SvcContinue"):
        accepted = accepted | win32service.SERVICE_ACCEPT_PAUSE_CONTINUE
    if hasattr(self, "SvcShutdown"): accepted = accepted | win32service.SERVICE_ACCEPT_SHUTDOWN
    return accepted
I suppose the events are recommended because that way you can interrupt the wait from outside the GIL, even if Python is in a blocking call on the main thread (e.g. time.sleep(10)); you can interrupt at those points outside the GIL and avoid having an unresponsive service.
Most of the Win32 service calls sit between the Python C macros Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS.
It may be that, being examples, they don't have anything otherwise interesting to do in SvcDoRun. SvcStop will be called from another thread, so using an event is just an easy way to do the cross-thread communication to have SvcDoRun exit at the appropriate time.
If there were some service-like functionality that blocks in SvcDoRun, they wouldn't necessarily need the events. Consider the second example in the CherryPy page that you linked to. It starts the web server in blocking mode, so there's no need to wait on an event.
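For illustration, the usual event-based pattern looks roughly like this (a stripped-down sketch, not the code from the linked page; the service names are placeholders):

import win32event
import win32service
import win32serviceutil

class MyService(win32serviceutil.ServiceFramework):
    _svc_name_ = "MyService"          # placeholder name
    _svc_display_name_ = "My Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        # Auto-reset event used purely for cross-thread signalling.
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcDoRun(self):
        # Block here until SvcStop (called from another thread) sets the event.
        win32event.WaitForSingleObject(self.stop_event, win32event.INFINITE)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)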
I have spent about 5 hours searching for how to do this, to no avail.
We are using ws4py on top of CherryPy. Currently, when a connection is physically lost (say you turn off your WiFi), the connection will not be terminated until a message is sent from the server, at which point it detects the dropped line and cleans up the socket.
This is causing us issues, and we need to know sooner if the socket is gone.
the file "websocket.py" in ws4py has a class called "Heartbeat" which looks like exactly what I want, and I believe that an instance is created inside the "WebSocket" if its has a "heartbeat_freq" parameter passed in;
class WebSocket(object):
    """ Represents a websocket endpoint and provides a high level interface to drive the endpoint. """
    def __init__(self, sock, protocols=None, extensions=None, environ=None, heartbeat_freq=None):
Above is the ws4py ctor, but I cannot find where this code is called from. What I do know is that it is tied up in a CherryPy callback system. Here is what I found:
The above ctor is called from "cherrypyserver.py" in the function:
def upgrade(self, protocols=None, extensions=None, version=WS_VERSION, handler_cls=WebSocket, heartbeat_freq=None):
This function seems to be a callback, as it is called from _cprequest.py in the function:
def __call__(self):
    """Run self.callback(**self.kwargs)."""
    return self.callback(**self.kwargs)
Now, there is a bit more stuff floating around, but in honesty I'm kind of lost and think I am going about this wrong.
From what I can figure out, I need to set the "heartbeat_freq" parameter of the callback, but am not sure where I would set this parameter. The code below is where I specify the "WebSocket" handler class (websocket2.Handler inherits from "WebSocket") that the callback creates an instance of.
rootmap2 = {
    'wsgi.pipeline': [
        ('validator1', validator),
        ('validator2', validator),
    ],
    'tools.websocket.on': True,
    'tools.websocket.handler_cls': websocket2.Handler,
}
I believe that somewhere in this "rootmap" I have to specify the parameter. Does anybody know how to do this?
To clarify, I want my server to create a heartbeat for each peer. I believe this is done by passing in a "heartbeat_freq" value.
At the moment I am just broadcasting a heartbeat to all peers, which I don't like the sound of, personally.
The heartbeat is only activated when the websocket object has its run method executed (usually in a thread, as it's blocking). However, the CherryPy plugin does not use that code path anymore, for various reasons. I'm not yet decided as to what to do with the heartbeat because, as it stands, it would have to run in its own thread for each websocket, which I find expensive memory-wise. However, if you do need that feature, create an instance of the Heartbeat class with the websocket instance, and simply hold a reference to each Heartbeat instance.
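Something along these lines (just a sketch; I'm assuming the Heartbeat class in your ws4py version is a thread that takes the websocket and a frequency in seconds, so please check its actual signature):

from ws4py.websocket import Heartbeat

heartbeats = []  # hold references so the heartbeat threads stay alive

def start_heartbeat(ws, frequency=2.0):
    # One Heartbeat per connected websocket; assumed to ping the peer periodically.
    hb = Heartbeat(ws, frequency=frequency)
    heartbeats.append(hb)
    hb.start()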
I'm using Pywin32 to communicate with Bloomberg through its COM library. This works rather well! However, I have stumbled upon a problem which I consider pretty complex. If I set the QueueEvents property of the COM object to True, the program fails. The documentation has a section regarding this:
If your QueueEvents property is set to True and you are performing low-level instantiation of the data control using C++, then in your data event handler (invoke) you will be required to initialize pvarResult by calling the VariantInit() function. This will prevent your application from receiving duplicate ticks.
import win32com.client

session = win32com.client.DispatchWithEvents(comobj, EventHandler)
session.QueueEvents = True  # <-- this triggers some strange "bugs" in execution
                            #     if "pvarResult" is not initialized
I think I understand the theoretical aspects here: you need to initialize a data structure before the COM object can write to it. However, how do you do this from Pywin32? That I have no clue about, and I would appreciate any ideas or pointers(!) as to how this can be done.
None of the tips below helped. My program doesn't throw an exception; it just returns the same message from the COM object again and again and again...
From the documentation:
If your QueueEvents property is set to True and you are performing low-level instantiation of the data control using C++, then in your data event handler (invoke) you will be required to initialize pvarResult by calling the VariantInit() function. This will prevent your application from receiving duplicate ticks. If this variable is not set then the data control assumes that you have not received data yet, and it will then attempt to resend it. In major containers, such as MFC and Visual Basic, this flag will automatically be initialized by the container. Keep in mind that this only pertains to applications, which set the QueueEvents property to True.
I'm not sure if this will help with your issue, but to have working COM events in Python you shouldn't forget about:
setting the COM apartment to free-threaded at the beginning of the script file. This can be done using the following lines:
import sys
sys.coinit_flags = 0
generating the wrapper for the COM library before the first call to DispatchWithEvents:
from win32com.client.makepy import GenerateFromTypeLibSpec
GenerateFromTypeLibSpec("ComLibName 1.0 Type Library")
If you could post how the program fails (the COM object fails, or maybe Python throws some exceptions), maybe I could advise more.
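Putting both tips together, the initialization order would look something like this (a sketch; the type library name is the placeholder from above, and the ProgID and the OnData event name are placeholders of mine, to be replaced by the Bloomberg ones you actually use):

import sys
sys.coinit_flags = 0  # free-threaded apartment; must be set before win32com/pythoncom is imported

import win32com.client
from win32com.client.makepy import GenerateFromTypeLibSpec

# Generate the early-bound wrapper once, before the first DispatchWithEvents call.
GenerateFromTypeLibSpec("ComLibName 1.0 Type Library")  # placeholder library name

class EventHandler:
    def OnData(self, *args):  # placeholder event name
        print("event received", args)

comobj = "ComLibName.Control.1"  # placeholder ProgID
session = win32com.client.DispatchWithEvents(comobj, EventHandler)
session.QueueEvents = True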