Is it possible to nest an arbitrary number of Singleton classes within a Singleton class in Python?
There's no problem in changing my approach if a simpler alternative exists; I am just using the "tools in my toolset", if you will. I'm simulating some larger processes, so bear with me if it seems a bit far-fetched.
An arbitrary number of gRPC servers can be started up, each listening on a different port, so a client needs a separate channel, and thus a separate stub, for each server it communicates with.
You could just create a new channel and a new stub every time a client needs to make a request to a server, but I am trying to incorporate some best practices and reuse the channels and stubs. My idea is to create a Singleton class composed of Singleton subclasses that house a channel and stub pair as instance attributes. I would have to build the enclosing class in a way that allows me to add additional subclasses whenever needed, but I have never seen this kind of thing done before.
The advantage to this approach is that any module that instantiates the main Singleton class will have access to the existing state of the channel and stub pairs, without having to recreate anything.
I should note that I already initialize channels and stubs from within Singleton classes and can reuse them with no problem. But the main goal here is to create a data structure that allows me to reuse/share a variable number of gRPC channel and stub pairs.
The following is the code for reusing the gRPC Channel object; the stubs are built in a very similar way, the only difference being that they accept the channel as an argument.
import grpc

class gRPCChannel(object):
    _instance, _channel = None, None
    port: int = 50051

    def __new__(cls):
        """Subsequent calls return the existing channel without repeating the initialization step"""
        if cls._instance is None:
            cls._instance = super(gRPCChannel, cls).__new__(cls)
            # The following is any initialization that needs to happen for the channel object
            cls._channel = grpc.insecure_channel(f'localhost:{cls.port}', options=(('grpc.enable_http_proxy', 0),))
        return cls._channel

    def __exit__(self, exc_type, exc_value, traceback):
        # context-manager exit must accept the exception triple
        self._channel.close()
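For illustration, a matching stub singleton might look like the sketch below; my_service_pb2_grpc and MyServiceStub are hypothetical names standing in for the protoc-generated module and stub class, and the shared channel singleton above is reused:

from my_service_pb2_grpc import MyServiceStub  # hypothetical protoc-generated stub class

class gRPCStub(object):
    _instance, _stub = None, None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(gRPCStub, cls).__new__(cls)
            # gRPCChannel() returns the shared channel object, per the __new__ above
            cls._stub = MyServiceStub(gRPCChannel())
        return cls._stub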
I think this is simpler than the "singleton pattern":
# grpc.py
import functools as ft

class GrpcChannel:
    pass  # normal class, no funny __new__ overload business

@ft.lru_cache  # Python 3.8+; on older versions write @ft.lru_cache()
def channel():
    return GrpcChannel()
Usage:
import grpc
channel = grpc.channel()
assert grpc.channel() is channel
If you really want it all namespaced under the class (IMO no reason to, takes more typing & syntax for no extra benefit), then:
class GrpcChannel:
    @classmethod
    @ft.lru_cache
    def instance(cls):
        return cls()
# usage
assert grpc.GrpcChannel.instance() is grpc.GrpcChannel.instance()
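Since the question asks for a variable number of channel/stub pairs, the same lru_cache trick extends naturally by keying the cache on the port. A sketch, using the real grpc package and a hypothetical protoc-generated ServiceStub; note the module should not itself be named grpc.py, or it will shadow the package:

import functools as ft
import grpc
from my_service_pb2_grpc import ServiceStub  # hypothetical generated stub class

@ft.lru_cache
def channel(port):
    # one cached channel per port; repeated calls with the same port return the same object
    return grpc.insecure_channel(f'localhost:{port}')

@ft.lru_cache
def stub(port):
    return ServiceStub(channel(port))

# usage
assert channel(50051) is channel(50051)
assert channel(50052) is not channel(50051)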
Is this good Python practice?
import threading
import Queue  # Python 2 module; spelled queue in Python 3

class Poppable(threading.Thread):
    def __init__(self):
        super(Poppable, self).__init__()
        self._q = Queue.Queue()
        # provide a limited subset of the Queue interface to clients
        self.qsize = self._q.qsize
        self.get = self._q.get

    def run(self):
        # <snip> -- do stuff that puts new items onto self._q
        # this is why clients don't need access to put functionality
        pass
Does this approach of "promoting" member's functions up to the containing class's interface violate the style, or Zen, of Python?
Mainly I'm trying to contrast this approach with the more standard one that would involve declaring wrapper functions normally:
def qsize(self):
    return self._q.qsize()

def get(self, *args):
    return self._q.get(*args)
I don't think this is Python-specific. In general, it is good OOP practice. You expose just the functions the client needs to know about, hiding the internals of the contained queue. This is a typical approach when wrapping an object, and it is fully compliant with the principle of least knowledge.
If the client had to call self._q.qsize instead of self.qsize, you could not easily replace _q with a different data type later, one that doesn't have a qsize method. So your approach makes the object more open to possible future changes.
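One practical difference between the two styles is worth spelling out: aliasing binds the methods of whatever queue exists at __init__ time, while a wrapper method looks up self._q on every call, so only the wrapper keeps working if _q is later replaced. A minimal illustration (using the Python 3 queue module):

import queue

class Aliased:
    def __init__(self):
        self._q = queue.Queue()
        self.qsize = self._q.qsize  # bound to this particular queue object

class Wrapped:
    def __init__(self):
        self._q = queue.Queue()

    def qsize(self):
        return self._q.qsize()  # re-reads self._q on every call

a, w = Aliased(), Wrapped()
a._q = queue.Queue()  # swap the underlying queue
w._q = queue.Queue()
# a.qsize() still reports on the old, replaced queue; w.qsize() follows the swap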
I need some help in terms of 'pythonic' way of handling a specific scenario.
I'm writing an Ssh class (wrapping paramiko) that provides the capability to connect to and execute commands on a device under test (DUT) over ssh.
class Ssh:
    def connect(self, some_params):
        # establishes connection
        pass

    def execute_command(self, command):
        # executes command and returns response
        pass

    def disconnect(self, some_params):
        # closes connection
        pass
Next, I'd like to create a Dut class that represents my device under test. It has other things besides the capability to execute commands on the device over ssh. It exposes a wrapper for command execution that internally invokes Ssh's execute_command. Ssh may be replaced with something else in the future, hence the wrapper.
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)

    def execute_command(self, command):
        return self.ssh.execute_command(command)
Next, the device supports a custom command-line interface. So there's a class that accepts a Dut object as input and exposes a method to execute the customised command.
class CustomCli:
    def __init__(self, dut_object):
        self.dut = dut_object

    def _customize(self, command):
        # return customised command
        pass

    def execute_custom_command(self, command):
        return self.dut.execute_command(self._customize(command))
Each of the classes can be used independently (CustomCli would need a Dut object though).
Now, to simplify things for the user, I'd like to expose a wrapper for CustomCli in the Dut class. This'll allow the creator of the Dut object to execute a simple or custom command.
So, I modify the Dut class as below:
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)
        self.custom_cli = CustomCli(self)  # how to avoid this circular reference in a pythonic way?

    def execute_command(self, command):
        return self.ssh.execute_command(command)

    def execute_custom_command(self, command):
        return self.custom_cli.execute_custom_command(command)
This will work, I suppose. But in the process I've created a circular reference: Dut points to CustomCli, and CustomCli has a reference to its creator Dut instance. This doesn't seem to be the correct design.
What's the best/pythonic way to deal with this?
In general, circular references aren't a bad thing. Many programs will have them, and people just don't notice because there's another instance in-between like A->B->C->A. Python's garbage collector will properly take care of such constructs.
You can make circular references a bit easier on your conscience by using weak references. See the weakref module. This won't work in your case, however.
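For illustration, the weakref variant would look roughly like this; as said, it doesn't help in this particular design:

import weakref

class CustomCli:
    def __init__(self, dut_object):
        # a proxy instead of a strong reference: the cycle is broken,
        # and the Dut can be collected by reference counting alone
        self.dut = weakref.proxy(dut_object)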
If you want to get rid of the circular reference, there are two ways:
Have CustomCLI inherit from Dut, so you end up with just one instance. You might want to read up on Mixins.
class CLIMerger(Dut):
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIMixin(object):
    # inherit from object, won't work on its own
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIDut(Dut, CLIMixin):
    # now the mixin "works", but could still enhance other Duts the same way
    pass
The Mixin is advantageous if you need several cases of merging a CLI and Dut.
Have an explicit interface class that combines CustomCli and Dut.
class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCli(self.dut)
This requires you to write boilerplate or magic to forward every call from DutCLI to either dut or cli.
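The "magic" variant can be a simple __getattr__ that forwards unknown attribute lookups, first to the cli and then to the dut; a sketch:

class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCli(self.dut)

    def __getattr__(self, name):
        # only invoked for attributes not found on DutCLI itself
        for delegate in (self.cli, self.dut):
            if hasattr(delegate, name):
                return getattr(delegate, name)
        raise AttributeError(name)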
I have a small Pyramid web service.
I have also a python class that creates an index of items and methods to search fast across them. Something like:
class MyCorpus(object):
    def __init__(self):
        self.table = AwesomeDataStructure()

    def insert(self):
        self.table.push_back(1)

    def find(self, needle):
        return self.table.find(needle)
I would like to expose the above class to my API.
I can create only one instance of that class (memory limit).
So I need to be able to instantiate this class before the server starts.
And my threads should be able to access it.
I also need some locking mechanism (concurrent inserts are not supported).
What is the best way to achieve that?
Add an instance of your class to the global application registry during your Pyramid application's configuration:
config.registry.mycorpus = MyCorpus()
and later, for example in your view code, access it through a request:
request.registry.mycorpus
You could also register it as a utility with Zope Component Architecture using registry.registerUtility, but you'd need to define what interface MyCorpus provides etc., which is a good thing in the long run. Either way having a singleton instance as part of the registry makes testing your application easier; just create a configuration with a mock corpus.
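Pieced together, a minimal sketch of what this looks like in a Pyramid application; module and route names here are illustrative:

# __init__.py of the Pyramid app
from pyramid.config import Configurator
from .corpus import MyCorpus  # wherever MyCorpus lives

def main(global_config, **settings):
    config = Configurator(settings=settings)
    # created once, before the server starts serving requests
    config.registry.mycorpus = MyCorpus()
    config.add_route('search', '/search')
    config.scan()
    return config.make_wsgi_app()

# views.py
from pyramid.view import view_config

@view_config(route_name='search', renderer='json')
def search(request):
    return request.registry.mycorpus.find(request.params['needle'])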
Any locking should be handled by the instance itself:
from threading import Lock

class MyCorpus(object):
    def __init__(self, Lock=Lock):
        self.table = AwesomeDataStructure()
        self.lock = Lock()

    ...

    def insert(self):
        with self.lock:
            self.table.push_back(1)
Any global variable is shared between threads in Python, so this part is really easy: "... create only one instance of that class ... before the server starts ... threads should be able to access it":
corpus = MyCorpus() # in global scope in any module
Done! Then import the instance from anywhere and call your class' methods:
from mydata import corpus
corpus.do_stuff()
No need for ZCA, plain pythonic Python :)
(The general approach of keeping something large and very database-like within the webserver process feels quite suspicious, though; I hope you know what you're doing. I mean: persistence? locking? sharing data between multiple processes? Redis, MongoDB and 1001 other database products have those problems solved.)
I am currently rewriting some code that uses Python's select.select() method, but select only returns socket objects, which I then have to manually match back to the class instance whose socket was stored under __init__. The pseudo-code for that would basically be [classobject for classobject in classList if SocketFromSelection == classobject.socketobject][0] (which is pretty much what I'm using).
I found in python documentation that in select.select(), "You may also define a wrapper class yourself, as long as it has an appropriate fileno() method (that really returns a file descriptor, not just a random integer)."
My question is: how would I attach a fileno() method to a class so that I can pass a sequence of these classes into select.select() and it returns the classes, not just the sockets? Also, would this run on Windows? If not, is there a better way to match the socket to the socket in a class in a list of classes?
From the code you included in your question, it sounds like you have a class that contains a socket inside of it (as the socketobject attribute). In this case, you can make your wrapper object selectable by proxying the socket's fileno method on your wrapper:
class SocketWrapper(object):  # class, not def
    def __init__(self, socket):
        self.socketobj = socket  # use whatever you are already doing

    def fileno(self):
        # delegate to the wrapped socket's real file descriptor
        return self.socketobj.fileno()
Now you can pass instances of SocketWrapper directly to select, rather than passing the sockets and then later having to sort out which socket corresponds to which instance.
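A usage sketch, assuming connected_sockets is a list of already-connected socket objects:

import select

wrappers = [SocketWrapper(s) for s in connected_sockets]

# select returns the wrapper objects themselves, not the raw sockets
readable, _, _ = select.select(wrappers, [], [], 1.0)
for wrapper in readable:
    data = wrapper.socketobj.recv(4096)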
I discovered that the existence and use of metaclasses can save you a lot of code-writing by providing an elegant handle on the process of class creation. I use this in my application, where several interacting servers are instantiated. To elaborate:
Each device instantiates a server class specific to its operation, which is a subclass of (a subclass of...) ultimately this one BaseServer class. Now, some device servers need a ThreadedTCPServer, and some need a SimpleTCPServer (module: socketserver). They cannot all derive from the same class, because using the ThreadingMixIn overrides the behavior of the SimpleTCPServer.
To handle this dynamic class configuration, I created a MetaServerType, which chooses the base classes for BaseServer as (SimpleTCPServer,) or as (ThreadedTCPServer,) --> producing my desired result of dynamically configured server classes! (Woo hoo)
Now, here comes my question:
I would like to use a configuration file where parameters are stored, and these parameters are used by default by the MetaServerType. For example: config.default_loglevel, or config.default_handler, etc. And individual servers can be overridden (from the command line or otherwise) according to the metaclass specifications.
Is it good design practice to have only one instance of the configuration object passed through the program-flow? One way to achieve this is to initialize the config object in the class body of the metaclass -- but my program-flow begins elsewhere, and this means that the metaclass is called several times, thus producing various instances of config. It appears that the metaclass is called at import time (?)
So a detailed answer would be very welcome to:
How can one supply metaclasses with configuration info?
What is a good way to have a single config instance be passed through the program-flow, to be edited, updated and perhaps eventually written?
Can the input arguments to metaclass be somehow extended beyond the Metaclass.__new__(meta, name, bases, attrs)?
Bonus question: Does this move us one step closer to a finite-state machine (of servers) so that the state (not the instances) can be 'paused' or 'resumed'?
1 - How can one supply metaclasses with configuration info?
There are a couple of ways to do that. Since your metaclasses live in their own module (and yes, the module is executed once at import time, regardless of how many times it is imported in the same application), a nice way to configure them would be to have a callable object (either a class or a function in the same module) that sets up "global variables" to be used for configuration.
Despite their bad reputation due to C where the name "global" originates, global variables in Python are actually "module" variables: that means that all the functions (including methods) in that module can access these variables. Functions or code in other modules would have to prefix the module name for that.
So a function like:
def configure_servers(p1, p2, ...):
    global opt1, opt2, ...
    opt1 = p1
    opt2 = p2
    (...)
Could be called from your application entry-point, before the server instances are created. (Of course, you could pass a config-file path to be read instead of p1, p2, ...)
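A sketch of the metaclass side consuming those module globals; all names here are illustrative:

# servers.py -- the module where the metaclass lives
default_loglevel = "INFO"

def configure_servers(loglevel):
    global default_loglevel
    default_loglevel = loglevel

class MetaServerType(type):
    def __new__(meta, name, bases, attrs):
        # fall back to the module-level default unless the class body overrides it
        attrs.setdefault("loglevel", default_loglevel)
        return super(MetaServerType, meta).__new__(meta, name, bases, attrs)

# in the application entry point, before the server classes are imported/defined:
# configure_servers("DEBUG")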
2 - What is a good way to have a SINGLE config instance be passed through the program-flow, to be edited, updated and perhaps eventually written?
A global (module) variable name on the metaclass module could be read by all of them, and it could be associated with a complex configuration object. Maybe the existence of a "config" function like the one above can render this question obsolete.
But in case you really need a "singleton" object, that is, an object of which there is just one instance, you can do it the easy way: have a single class in the metaclass module and pass that class itself around, instead of an instance of it. Better and cleaner still: use a dictionary instead of a class.
If you need to create a "real" singleton object, you should create a class and override its __new__ method so that it always returns the first created instance.
Example:
class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kw):
        if cls._instance is not None:
            return cls._instance
        # object.__new__ takes no extra arguments in Python 3
        self = object.__new__(cls)
        cls._instance = self
        return self
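A quick check of the behaviour:

a = Singleton()
b = Singleton()
assert a is b  # every construction returns the same instance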
3 - Can the input arguments to the metaclass be somehow extended beyond Metaclass.__new__(meta, name, bases, attrs)?
Not taking advantage of the language syntax.
I mean, it is always possible to call the metaclass as a normal Python call, but that would prevent you from using the language syntax to describe your class: you'd need to define the class body as a dictionary to pass in as attrs for the call.
For example, to create a derived exception class, one could do:
MyException = type("MyException", (Exception, ), {})
Instead of:
class MyException(Exception):
    pass
The usual way of passing additional information to the metaclass is to use attributes with fixed names in the class body. The metaclass then checks these attributes inside attrs and uses them. It can choose to keep them in the resulting class, or delete them from the attrs dict at that point.
If the information you need to pass the metaclass is only known at runtime, these attributes can point to other (module-level) variables, or contain Python expressions that are evaluated at class creation time.
import pickle

mod_server_type = "TCP"

class YAServer(ParentServer):
    __metaclass__ = ServerMetaBase  # Python 2 syntax; Python 3 uses class YAServer(ParentServer, metaclass=ServerMetaBase)
    _server_type = mod_server_type

    with open("config_file", "rb") as config:  # pickle needs a binary-mode file
        _server_params = pickle.load(config)
    del config

    def __init__(self, ...):
        ...
In the example above, your metaclass could consume the _server_type and _server_params attributes to further control the class creation.
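A sketch of the consuming side; TCPServerBase and ThreadedTCPServerBase are hypothetical stand-ins for the two server base classes being chosen between:

class ServerMetaBase(type):
    def __new__(meta, name, bases, attrs):
        # pull the fixed-name attributes out of the class body
        server_type = attrs.pop("_server_type", "TCP")
        server_params = attrs.pop("_server_params", {})
        # choose the real base class from the declared type
        extra_base = ThreadedTCPServerBase if server_type == "THREADED" else TCPServerBase
        cls = type.__new__(meta, name, (extra_base,) + bases, attrs)
        cls._server_params = server_params
        return cls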