I would like to create a single "interface" class (in interface.py) through which I can access an underlying class's functionality, where the class accessed depends on an XML config_file setting.
Take connecting via ssh or ftp as an example.
I'd have a variable set in my config file such as
"INTERFACE": "ssh"
Then I'd expect to have code that looks something like this:
# file = interface.py
class Interface(?):
    def connect(self, *args, **kwargs):
        return self

# file = ssh.py
class SSH(?):
    def connect(self, *args, **kwargs):
        # Set up a connection
        connection = paramiko.SSHClient()
        return connection

# file = ftp.py
class FTP(?):
    def connect(self, *args, **kwargs):
        # Set up a connection
        return connection
# And in my calling code I would just like a generic call e.g.
from path.interface import Interface
foo = Interface()
foo.connect(bar) # Where "INTERFACE" : "ssh"
Then the SSH class would (override?) execute the code defined in its version of connect().
If I then change the config setting to config.INTERFACE = "ftp", the same call would "find its way" to the FTP class and establish an ftp connection.
Ideally I'd be able to flip between the different versions of connect() in my code simply by setting:
config.INTERFACE = "ssh"
config.INTERFACE = "ftp"
I assume this isn't some unachievable thing? I don't even know what to google to find out how to do this! Is this overriding?
Any advice would be gratefully received.
Just a topic to google would be a starting point. :)
A standard pattern for doing this is something like the following:
from abc import abstractmethod, ABCMeta

class Interface(metaclass=ABCMeta):
    @abstractmethod
    def upload(self):
        raise NotImplementedError('Must implement upload in subclasses')

class SSH(Interface):
    def upload(self):
        ...  # ssh implementation

class FTP(Interface):
    def upload(self):
        ...  # ftp implementation

def InterfaceForConfigurationValue(interface_value):
    if interface_value == 'ssh':
        return SSH()
    if interface_value == 'ftp':
        return FTP()
    raise NotImplementedError('Interface not available for value %s' % (
        interface_value,))
Interface is the abstract base class defining the interface you want to use. Useless on its own, it needs concrete subclasses to implement it.
Note that for this to be a useful abstraction and worth the effort, you want to make it more tailored to your application and provide higher-level APIs, such as upload() or get(); otherwise it's a bit pointless, and you may find it hard to generalize to different protocols.
SSH and FTP both override the upload() method of their superclass (Interface) and so you can call upload() on your Interface instance without worrying about which particular subclass it is.
InterfaceForConfigurationValue is a factory function that gives you the correct interface subclass based on the configuration option.
In general the most important thing is to ensure that nothing about Interface, SSH or FTP knows or cares what is in the config file, because you may wish to use them elsewhere with a different config system. They just know what they need to do. You add a factory function which is small and contains the knowledge of how to translate your config into a subclass.
Note: your factory function doesn't have to be a global function. Often you'll find there are lots of similar bits of linking you need to do between your config system and your code, so you may want to use a class and have the factory as a method on the class. A subclass of ConfigParser is often a good option.
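For completeness, a minimal sketch of how the calling code might tie the factory to the config value; it assumes the XML config has already been parsed into a dict-like object, and the "INTERFACE" key is taken from the question:

config = {"INTERFACE": "ssh"}  # stand-in for the parsed XML config

interface = InterfaceForConfigurationValue(config["INTERFACE"])
interface.upload()  # dispatches to SSH.upload() or FTP.upload() depending on the config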
Is it possible to nest an arbitrary number of Singleton classes within a Singleton class in Python?
There's no problem in changing my approach to solving this issue if a simpler alternative exists. I am just using the "tools in my toolset", if you will. I'm simulating some larger processes so bear with me if it seems a bit far-fetched.
An arbitrary number of gRPC servers can be started up, and each server will be listening on a different port. So for a client to communicate with these servers, a separate channel, and thus a separate stub, must be created for each server.
You could just create a new channel and a new stub every time a client needs to make a request to a server, but I am trying to incorporate some best practices and reuse the channels and stubs. My idea is to create a Singleton class composed of Singleton subclasses that hold a channel and stub pair as instance attributes. I would have to build the enclosing class in a way that lets me add additional subclasses whenever needed, but I have never seen this type of thing done before.
The advantage to this approach is that any module that instantiates the main Singleton class will have access to the existing state of the channel and stub pairs, without having to recreate anything.
I should note that I already initialize channels and stubs from within Singleton classes and can reuse them with no problem. But the main goal here is to create a data structure that allows me to reuse/share a variable amount of gRPC channel and stub pairs.
The following is the code for reusing the gRPC Channel object; the stubs are built in a very similar way, the only difference being that they accept the channel as an argument.
import grpc

class gRPCChannel(object):
    _instance, _channel = None, None
    port: int = 50051

    def __new__(cls):
        """Subsequent calls return the singleton channel without repeating the initialization step."""
        if cls._instance is None:
            cls._instance = super(gRPCChannel, cls).__new__(cls)
            # The following is any initialization that needs to happen for the channel object
            cls._channel = grpc.insecure_channel(f'localhost:{cls.port}',
                                                 options=(('grpc.enable_http_proxy', 0),))
        return cls._channel

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._channel.close()
I think this is simpler than the "singleton pattern":
# grpc.py
import functools as ft

class GrpcChannel:
    pass  # normal class, no funny __new__ overload business

@ft.lru_cache
def channel():
    return GrpcChannel()
Usage:
import grpc
channel = grpc.channel()
assert grpc.channel() is channel
If you really want it all namespaced under the class (IMO no reason to, takes more typing & syntax for no extra benefit), then:
class GrpcChannel:
    @classmethod
    @ft.lru_cache
    def instance(cls):
        return cls()
# usage
assert grpc.GrpcChannel.instance() is grpc.GrpcChannel.instance()
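If the goal from the question is reusing one channel per server port, the same lru_cache approach extends naturally, because the cache is keyed on the function arguments. A rough sketch, assuming the real grpc package and the localhost/port scheme from the question's code:

import functools as ft
import grpc

@ft.lru_cache
def channel_for_port(port: int):
    # One cached channel per distinct port; repeated calls with the same port reuse it.
    return grpc.insecure_channel(f'localhost:{port}')

# usage
assert channel_for_port(50051) is channel_for_port(50051)
assert channel_for_port(50052) is not channel_for_port(50051)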
I need some help with the 'pythonic' way of handling a specific scenario.
I'm writing an Ssh class (wrapping paramiko) that provides the capability to connect to and execute commands on a device under test (DUT) over ssh.
class Ssh:
    def connect(self, some_params):
        # establishes connection
        ...

    def execute_command(self, command):
        # executes command and returns response
        ...

    def disconnect(self, some_params):
        # closes connection
        ...
Next, I'd like to create a Dut class that represents my device under test. It does other things besides executing commands on the device over ssh, and it exposes a wrapper for command execution that internally invokes Ssh's execute_command. The Ssh class may change to something else in the future, hence the wrapper.
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)

    def execute_command(self, command):
        return self.ssh.execute_command(command)
Next, the device supports a custom command-line interface, so I have a class that accepts a Dut object as input and exposes a method to execute customised commands.
class CustomCli:
    def __init__(self, dut_object):
        self.dut = dut_object

    def _customize(self, command):
        # return customised command
        ...

    def execute_custom_command(self, command):
        return self.dut.execute_command(self._customize(command))
Each of the classes can be used independently (CustomCli would need a Dut object though).
Now, to simplify things for the user, I'd like to expose a wrapper for CustomCli in the Dut class. This will allow the creator of the Dut object to execute a simple or a custom command.
So, I modify the Dut class as below:
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)
        self.custom_cli = CustomCli(self)  # how to avoid this circular reference in a pythonic way?

    def execute_command(self, command):
        return self.ssh.execute_command(command)

    def execute_custom_command(self, command):
        return self.custom_cli.execute_custom_command(command)
This will work, I suppose. But in the process I've created a circular reference: Dut points to CustomCli, and CustomCli has a reference to its creator, the Dut instance. This doesn't seem to be the correct design.
What's the best/pythonic way to deal with this?
Any help would be appreciated!
Regards
Sharad
In general, circular references aren't a bad thing. Many programs will have them, and people just don't notice because there's another instance in-between like A->B->C->A. Python's garbage collector will properly take care of such constructs.
You can make circular references a bit easier on your conscience by using weak references. See the weakref module. This won't work in your case, however.
If you want to get rid of the circular reference, there are two ways:
Have CustomCLI inherit from Dut, so you end up with just one instance. You might want to read up on Mixins.
class CLIMerger(Dut):
    def execute_custom_command(self, command):
        # use self instead of self.dut; assumes _customize() is also moved onto this class
        return self.execute_command(self._customize(command))


class CLIMixin(object):
    # inherit from object, won't work on its own
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))


class CLIDut(Dut, CLIMixin):
    # now the mixin "works", but could still enhance other Duts the same way
    pass
The Mixin is advantageous if you need several cases of merging a CLI and Dut.
Have an explicit interface class that combines CustomCli and Dut.
class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCLI(self.dut)
This requires you to write boilerplate or magic to forward every call from DutCLI to either dut or cli.
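For illustration, one common form that "magic" takes is a __getattr__ fallback that delegates unknown attributes to the wrapped objects; this is only a sketch of that idea, not part of the original answer:

class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCli(self.dut)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails: try the CLI first,
        # then the Dut, instead of writing a forwarding method for each call.
        for target in (self.__dict__.get('cli'), self.__dict__.get('dut')):
            if target is not None and hasattr(target, name):
                return getattr(target, name)
        raise AttributeError(name)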
I am currently rewriting some code that uses Python's select.select() function, but it only returns socket objects, so I have to manually match each returned socket back to the instance that stored that socket in its __init__. The pseudo-code for that is basically [classobject for classobject in classList if SocketFromSelection == classobject.socketobject][0] (which is pretty much what I'm using).
I found in python documentation that in select.select(), "You may also define a wrapper class yourself, as long as it has an appropriate fileno() method (that really returns a file descriptor, not just a random integer)."
My question is: how would I attach a fileno() method to a class so that I can pass a sequence of these class instances into select.select() and it returns the instances rather than just the sockets? Also, would this run on Windows? If not, is there a better way to match each socket to the corresponding instance in a list of class instances?
From the code you included in your question, it sounds like you have a class that contains a socket inside of it (as the socketobject attribute). In this case, you can make your wrapper object selectable by proxying the socket's fileno method on your wrapper:
class SocketWrapper(object):
    def __init__(self, socket):
        self.socketobj = socket  # use whatever you are already doing

    def fileno(self):
        return self.socketobj.fileno()
Now you can pass instances of SocketWrapper directly to select, rather than passing the sockets and then later having to sort out which socket corresponds to which instance.
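A minimal sketch of what that looks like in the calling code (the wrappers list is hypothetical; select.select() hands back whichever objects you passed in, since it only calls their fileno()):

import select

# wrappers is assumed to be a list of SocketWrapper instances you already track
readable, writable, errored = select.select(wrappers, [], [], 1.0)
for wrapper in readable:
    # wrapper is the SocketWrapper itself, not the bare socket
    data = wrapper.socketobj.recv(4096)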
In the code below, the User class needs to access the get_user method of an instance of the WebService class, as that instance contains other functions required for authentication with the web server (last.fm). The actual code is here.
class WebService:
    def __init__(self, key):
        self.apikey = key

    def get_user(self, name):
        pass  # Omitted


class User:
    def __init__(self, name, webservice):
        self.name = name
        self.ws = webservice

    def fill_profile(self):
        data = self.ws.get_user(self.name)
        # Omitted
The problem is that a reference needs to be held inside every User instance. Is there another way of doing this? Or am I just overcomplicating things, and this is how it actually works in the real world?
As to handling things like get_top_albums and get_friends, that depends on how you want to model the system. If you don't want to cache the data locally, I'd say just call the service each time with a user ID. If you do want to cache the data locally, you could pass a User object to the method in WebService and have the method populate the members of the User. You do have to make a design decision, though, to either have a WebService and a User (which would probably be best) or just a UserWebService.
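A rough sketch of that second option (letting WebService populate a locally cached User), where fill_user and the profile attribute are purely illustrative names:

class WebService:
    def __init__(self, key):
        self.apikey = key

    def get_user(self, name):
        pass  # Omitted

    def fill_user(self, user):
        # Hypothetical helper: fetch the data once and cache it on the User.
        user.profile = self.get_user(user.name)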
You can certainly make the reference a static variable, if the web service object is the same for all users.
The syntax is:
class User:
    webservice = ...
    ...
You will then even be able to read it from User instances, but not assign to it that way; assignment would require the User.webservice syntax.
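A small sketch of that behaviour (the constructor arguments are just placeholders):

User.webservice = WebService('my-api-key')   # assign on the class, once

u = User('alice', None)           # hypothetical construction
print(u.webservice.apikey)        # reads fall through to the class attribute
u.webservice = WebService('x')    # careful: this creates an instance attribute,
                                  # it does not rebind User.webservice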
You are also getting good design alternatives suggested in the comments.
I tried to manipulate the __mro__, but it is read-only.
The use case is as follows:
The Connection object created by pyodbc (a DBAPI) used to provide a property called 'autocommit'. Lately I have wrapped a SQLAlchemy db connection pool around pyodbc for better resource management. The new db pool returns a _ConnectionFairy, a connection proxy class, which no longer exposes the autocommit property.
I would very much like to leave the third-party code alone, so inheriting from _ConnectionFairy is not really an option (I might need to override the Pool class to change how it creates a connection proxy; for the source code, please see here).
A rather inelegant solution would be to change every occurrence of
conn.autocommit = True
to
# original connection object is accessible via .connection
conn.connection.autocommit = True
So, I would like to know whether it is possible at all to inject a getter, a setter and a property into an instance of _ConnectionFairy.
You can "extend" almost any class using following syntax:
def new_func(self, param):
    print(param)

class a:
    pass

a.my_func = new_func

b = a()
b.my_func(10)
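Since the question asks specifically about a property: a property has to live on the class, because descriptors are looked up on the type rather than on the instance. The same class-patching idea would therefore look roughly like this; the class below is a stand-in, not the real _ConnectionFairy:

class FakeConnectionFairy:
    # Stand-in for the real proxy class; it is assumed to expose the raw
    # DBAPI connection as .connection, as described in the question.
    def __init__(self, raw_connection):
        self.connection = raw_connection

def _get_autocommit(self):
    return self.connection.autocommit

def _set_autocommit(self, value):
    self.connection.autocommit = value

# Patch the property onto the class; every instance then picks it up,
# so conn.autocommit = True forwards to the underlying connection.
FakeConnectionFairy.autocommit = property(_get_autocommit, _set_autocommit)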
UPDATE
If you want to create some kind of wrapper for certain methods, you can use getattr and setattr to save the original method and replace it with your own implementation. I've done this in my project, though in a slightly different way:
Here is an example:
import sys
import threading

class A:
    def __init__(self):
        # Save the (possibly overridden) prepare and swap in the wrapper.
        setattr(self, 'prepare_orig', getattr(self, 'prepare'))
        setattr(self, 'prepare', getattr(self, 'prepare_wrapper'))

    def prepare_wrapper(self, *args, **kwargs):
        def prepare_thread(*args, **kwargs):
            try:
                self.prepare_orig(*args, **kwargs)
            except Exception:
                print("Unexpected error:", sys.exc_info()[0])

        t = threading.Thread(target=prepare_thread, args=args, kwargs=kwargs)
        t.start()

    def prepare(self):
        pass
The idea of this code is that another developer can just implement the prepare method in a derived class and it will be executed in the background. It is not exactly what you asked, but I hope it helps you in some way.
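For instance, a hypothetical subclass only needs to define prepare; calling prepare() on an instance then returns immediately while the original body runs on a background thread:

class Worker(A):
    def prepare(self):
        # The "real" work; it ends up running on the background thread.
        print("doing slow setup")

w = Worker()
w.prepare()  # actually invokes prepare_wrapper(), which starts the thread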