I tried to manipulate __mro__, but it is read-only.
The use case is as follows:
The Connection object created by pyodbc (a DBAPI implementation) used to provide a property called autocommit. Lately I have wrapped a SQLAlchemy connection pool around pyodbc for better resource management. The new pool returns a _ConnectionFairy, a connection proxy class, which no longer exposes the autocommit property.
I would very much like to leave the third-party code alone, so inheriting from _ConnectionFairy is not really an option (I might need to override the Pool class to change how it creates a connection proxy; for the source code, please see here).
A rather inelegant solution is to change every occurrence of
conn.autocommit = True
to
# original connection object is accessible via .connection
conn.connection.autocommit = True
So, I would like to know if it is possible at all to inject a getter, a setter and a property into an instance of _ConnectionFairy.
You can "extend" almost any class using following syntax:
def new_func(self, param):
    print(param)

class a:
    pass

a.my_func = new_func

b = a()
b.my_func(10)
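To get a property rather than a plain method, note that properties are descriptors and therefore have to be set on the class, not on a single instance. A sketch for your concrete case - assuming the proxy class is importable as sqlalchemy.pool._ConnectionFairy and that .connection is the raw pyodbc connection, as you describe:

from sqlalchemy import pool

def _get_autocommit(self):
    # delegate to the underlying pyodbc connection
    return self.connection.autocommit

def _set_autocommit(self, value):
    self.connection.autocommit = value

# attach the property to the class; every _ConnectionFairy instance,
# existing and future, will expose .autocommit from now on
pool._ConnectionFairy.autocommit = property(_get_autocommit, _set_autocommit)

After this, existing code such as conn.autocommit = True keeps working unchanged.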
UPDATE
If you want to create wrappers for some methods, you can use getattr and setattr to save the original method and replace it with your own implementation. I've done it in my project, but in a slightly different way:
Here is an example:
import sys
import threading

class A:
    def __init__(self):
        setattr(self, 'prepare_orig', getattr(self, 'prepare'))
        setattr(self, 'prepare', getattr(self, 'prepare_wrapper'))

    def prepare_wrapper(self, *args, **kwargs):
        def prepare_thread(*args, **kwargs):
            try:
                self.prepare_orig(*args, **kwargs)
            except:
                print("Unexpected error:", sys.exc_info()[0])
        t = threading.Thread(target=prepare_thread, args=args, kwargs=kwargs)
        t.start()

    def prepare(self):
        pass
The idea of this code is that other developers can just implement a prepare method in derived classes and it will be executed in the background. It is not exactly what you asked for, but I hope it helps you in some way.
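For example, a usage sketch with the class above - a derived class only implements prepare, and calling prepare() on an instance actually invokes the wrapper, which runs the real method in a thread:

class Worker(A):
    def prepare(self):
        print("heavy preparation running in a background thread")

w = Worker()
w.prepare()   # returns immediately; Worker.prepare runs in a separate thread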
Related
I'm trying to create a class for interacting with a MySQL database. The class connects to the database and stores the connection object and cursor, allowing multiple actions without reconnecting every time. Once you're done with it, the cursor and connection need to be closed, so I've implemented it as a context manager, and it works great.
I need support for doing multiple actions without reopening the connection each time, but most of the time I am only running a single query/command, so I end up having to write the code below just to run one query.
with MySQLConnector() as connector:
    connector.RunQuery(QUERY)
I thought it would be nice to be able to run single-line commands like this:
MySQLConnector().RunQuery(QUERY)
but this leaves the connection open.
I then thought this implementation would allow me to do single and multiple actions, all nicely context-managed:
class MySQLConnector():
    @staticmethod
    def RunQuery(query):
        with MySQLConnector() as connector:
            connector.RunQuery(query)

    def RunQuery(self, query):
        self.cursor.execute(query)
        self.connection.commit()

# Single Actions
MySQLConnector.RunQuery(QUERY)

# Multiple Actions
with MySQLConnector() as connector:
    connector.RunQuery(QUERY_1)
    connector.RunQuery(QUERY_2)
However, the instance method seems to override the static method, and only one of the two will work at a time. I know I could just have two separately named functions, but I was wondering whether there is a better/more Pythonic way to implement this.
As this answer points out, you can use some __init__ trickery to redefine the method if it is being used as non-static (as __init__ will not run for static methods). Change the non-static RunQuery to something like _instance_RunQuery, then set self.RunQuery = self._instance_RunQuery. Like this:
class A:
    def __init__(self, val):
        self.add = self._instance_add
        self.val = val

    @staticmethod
    def add(a, b):
        obj = A(a)
        return obj.add(b)

    def _instance_add(self, b):
        return self.val + b

ten = A(10)
print(ten.add(5))
print(A.add(10, 5))
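Applied to your connector, a rough sketch (the connection handling inside __enter__/__exit__ is assumed here, since it isn't shown in your post):

class MySQLConnector:
    def __init__(self):
        # shadow the static RunQuery with the instance version
        self.RunQuery = self._instance_RunQuery

    def __enter__(self):
        # assumed: open self.connection and self.cursor here
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # assumed: close self.cursor and self.connection here
        pass

    @staticmethod
    def RunQuery(query):
        # used when called on the class: opens and closes its own connection
        with MySQLConnector() as connector:
            connector.RunQuery(query)

    def _instance_RunQuery(self, query):
        self.cursor.execute(query)
        self.connection.commit()

With this, MySQLConnector.RunQuery(QUERY) works as a one-shot call, while the with-block form from your question keeps working for multiple actions.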
I need to change a class variable, but I don't know if I can do it at runtime. I'm using third-party open-source software, and I'm not very experienced with pure Python inheritance (the software I use provides a custom inheritance system) or with Python in general. My idea was to inherit from the class and change the constructor, but the objects are created with the original class name, so they are not initialized with my __init__.
I partially solved it by inheriting from the classes that use the original class name and overriding their methods so that they now use my classes, but I still cannot reach every method because some of them are static.
The full steps of what I was trying to do can be found here: Inherit class Worker on Odoo15
I think I could solve the problem if I could use something, like a decorator or something else, to tell the parent class to 'look down' to the child class constructor when it is instantiated. Is there a way to do that?
I will provide an example:
class A(object):
    def __init__(self):
        self.timeout = 0

    def print_class_info(self):
        print(f'Obj A timeout: {self.timeout}')

class B(A):
    def __init__(self):
        super().__init__()
        self.timeout = 10

    def print_class_info(self):
        print(f'Obj B timeout: {self.timeout}')

# Is it possible, somehow, to make obj_A use the init of B class
# even if the call to the class is on class A?
obj_A = A()
obj_B = B()

obj_A.print_class_info()
obj_B.print_class_info()
OUT:
Obj A timeout: 0
Obj B timeout: 10
Of course, the situation is more complex in the real scenario, so I'm not sure whether I can simply access object A and set the class variable. I think I would have to do it at runtime, probably with a server restart, and I'm not even sure how to access the objects at runtime; as I said, I'm not very experienced with pure Python.
Maybe there is an easy way that I just don't see or know: is it possible, basically, to have a parent class call use a subclass constructor?
You can assign any attribute to a class, including a method. This is called monkey patching:
# save the old init function
A.__oldinit__ = A.__init__

# create a new function that calls the old one
def custom_init(self):
    self.__oldinit__()
    self.timeout = 10

# overwrite the old function
# the actual old function will still exist because
# it's referenced as A.__oldinit__ as well
A.__init__ = custom_init

# optional cleanup
del custom_init
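After the patch, even objects created through the original class name pick up the new initialisation (using the A/B example from the question):

obj_A = A()
obj_A.print_class_info()   # now prints: Obj A timeout: 10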
I need some help with the 'pythonic' way of handling a specific scenario.
I'm writing an Ssh class (wrapping paramiko) that provides the capability to connect to and execute commands on a device under test (DUT) over SSH:
class Ssh:
    def connect(self, some_params):
        # establishes connection
        pass

    def execute_command(self, command):
        # executes command and returns response
        pass

    def disconnect(self, some_params):
        # closes connection
        pass
Next, I'd like to create a Dut class that represents my device under test. It has other responsibilities besides the capability to execute commands on the device over SSH. It exposes a wrapper for command execution that internally invokes Ssh's execute_command. The Ssh class may change to something else in the future - hence the wrapper.
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)

    def execute_command(self, command):
        return self.ssh.execute_command(command)
Next, the device supports a custom command-line interface, so there is a class that accepts a Dut object as input and exposes a method to execute a customised command:
class CustomCli:
    def __init__(self, dut_object):
        self.dut = dut_object

    def _customize(self, command):
        # return customised command
        pass

    def execute_custom_command(self, command):
        return self.dut.execute_command(self._customize(command))
Each of the classes can be used independently (CustomCli needs a Dut object, though).
Now, to simplify things for the user, I'd like to expose a wrapper for CustomCli in the Dut class. This will allow the creator of the Dut object to execute a simple or a custom command.
So, I modify the Dut class as below:
class Dut:
    def __init__(self, some_params):
        self.ssh = Ssh(some_params)
        self.custom_cli = CustomCli(self)  # how to avoid this circular reference in a pythonic way?

    def execute_command(self, command):
        return self.ssh.execute_command(command)

    def execute_custom_command(self, command):
        return self.custom_cli.execute_custom_command(command)
This will work, I suppose. But in the process I've created a circular reference: Dut points to CustomCli, and CustomCli holds a reference to its creator Dut instance. This doesn't seem to be the correct design.
What's the best/pythonic way to deal with this?
Any help would be appreciated!
In general, circular references aren't a bad thing. Many programs will have them, and people just don't notice because there's another instance in-between like A->B->C->A. Python's garbage collector will properly take care of such constructs.
You can make circular references a bit easier on your conscience by using weak references. See the weakref module. This won't work in your case, however.
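For reference, a weak back-reference would look roughly like this (a minimal sketch only; as said, it does not solve your problem):

import weakref

class CustomCli:
    def __init__(self, dut_object):
        # store only a weak reference; it does not keep the Dut alive
        self._dut_ref = weakref.ref(dut_object)

    def execute_custom_command(self, command):
        dut = self._dut_ref()      # resolves to None once the Dut is gone
        if dut is None:
            raise RuntimeError("the Dut object no longer exists")
        # _customize as defined in your original CustomCli
        return dut.execute_command(self._customize(command))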
If you want to get rid of the circular reference, there are two ways:
Have CustomCLI inherit from Dut, so you end up with just one instance. You might want to read up on Mixins.
class CLIMerger(Dut):
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIMixin(object):
    # inherit from object, won't work on its own
    def execute_custom_command(self, command):
        # use self instead of self.dut
        return self.execute_command(self._customize(command))

class CLIDut(Dut, CLIMixin):
    # now the mixin "works", but still could enhance other Duts the same way
    pass
The Mixin is advantageous if you need several cases of merging a CLI and Dut.
Have an explicit interface class that combines CustomCli and Dut.
class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCLI(self.dut)
This requires you to write boilerplate or magic to forward every call from DutCLI to either dut or cli.
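A common way to keep that boilerplate small is __getattr__-based delegation; a rough sketch using the names from above:

class DutCLI(object):
    def __init__(self, *bla, **blah):
        self.dut = Dut(*bla, **blah)
        self.cli = CustomCLI(self.dut)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails;
        # try the CLI first, then the plain Dut
        for target in (self.cli, self.dut):
            if hasattr(target, name):
                return getattr(target, name)
        raise AttributeError(name)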
I have started writing a simple module for working with MongoDB. I am new to Python and I have been having a problem with the code I wrote:
import pymongo

class mongoDB():
    conn = object

    def __init__(self):
        global conn
        self.conn = pymongo.Connection("localhost", 27017)

    def CreateCollection(self, name=""):
        self.dbCollection = conn.name
        return self.dbCollection

if __name__ == '__main__':
    database = mongoDB
    collection = database.CreateCollection("Hello")
Firstly, I think there are probably a few things wrong with my code; if you can spot them, please correct me. Also, I keep getting this error:
collection = database.CreateCollection("Hello")
TypeError: unbound method CreateCollection() must be called with mongoDB
instance as first argument (got str instance instead)
I want to be able to create the connection in the constructor of the class, then have a method that creates a collection and returns it, and also other methods to insert, delete and update entities.
So, syntax-wise you have a number of problems. It looks like you're mixing a couple of tutorials in different ways. Firstly, I'll explain what is going on with your code and why you're seeing what you're seeing:
import pymongo

class mongoDB():   # you don't need ()'s here - only if you are inheriting classes
                   # you could inherit from object here, which is a good practice
                   # by doing class mongoDb(object):, otherwise you can just take
                   # them out

    conn = object  # here, you're defining a class member - global for all instances
                   # generally, you don't instantiate an object pointer like this,
                   # you would set it to None instead. It won't fail doing this,
                   # but it's not "right"

    def __init__(self):
        # the __init__ method is the constructor method - this will
        # allow you to initialize a particular instance of your class, represented
        # by the self argument. This method is called when you call the class, i.e.
        # inst = mongoDb()

        # in this case, the conn variable is not a global. Globals are defined
        # at the root module level - so in this example, only pymongo is a global
        # conn is a class member, and would be accessed by doing mongoDB.conn
        global conn

        # with that being said, you're initializing a local variable here called conn
        # that is not being stored anywhere - when this method finishes, this variable
        # will be cleaned up from memory; what you are thinking you're doing here
        # should be written as mongoDB.conn = pymongo.Connection("localhost", 27017)
        conn = pymongo.Connection("localhost", 27017)

    def CreateCollection(name=""):
        # there is one of two things you are trying to do here - 1, access a class
        # level member called conn, or 2, access an instance member called conn
        # depending on what you are going for, there are a couple of different ways
        # to address it.

        # all methods for a class, by default, are instance methods - and all of them
        # need to take self as the first argument. An instance method of a class
        # will always be called with the instance first. Your error is caused because
        # you should declare the method as:
        # def CreateCollection(self, name=""):

        # The alternative is to define this method as a static method of the class -
        # which does not take an instance but applies to all instances of the class.
        # To do that, you would add a @staticmethod decorator before the method.

        # either way, you're attempting to access the global variable "conn" here,
        # which again does not exist

        # the second problem with this is that you are trying to take your variable
        # argument (name) and use it as a property. What Python is doing here is
        # looking for a member variable called name on the conn object. What you
        # are really trying to do is create a collection on the connection with the
        # inputted name

        # the pymongo class provides access to your collections via this method as a
        # convenience around the method create_collection. In the case where you
        # are using a variable to create the collection, you would call this by doing
        # conn.create_collection(name)
        # but again, that assumes conn is what you think it is, which it isn't
        dbCollection = conn.name
        return dbCollection

if __name__ == '__main__':
    # here you are just creating a pointer to your class, not instantiating it
    # you are looking for:
    # database = mongoDB()
    database = mongoDB

    # this is your error, because of the aforementioned lack of 'self' argument
    collection = database.CreateCollection("Hello")
I'd say have a look through the PEP 8 style guide (http://www.python.org/dev/peps/pep-0008/) - it's very helpful for learning how to make your code "flow" pythonically.
Having gone through your code to explain what is going on - this is what you are ultimately trying to do:
import pymongo

class MongoDB:  # Classes generally are camel-case, starting with uppercase

    def __init__(self, dbname):
        # the __init__ method is the class constructor, where you define
        # instance members. We'll make conn an instance member rather
        # than a class level member
        self._conn = pymongo.Connection("localhost", 27017)
        self._db = self._conn[dbname]

    # methods usually start with lowercase, and are either camel case (less desirable
    # by Python standards) or underscored (more desirable)
    # All instance methods require the 1st argument to be self (pointer to the
    # instance being affected)
    def createCollection(self, name=""):
        return self._db[name]

if __name__ == '__main__':
    # you want to initialize the class
    database = MongoDB("Hello")
    collection = database.createCollection("MyTable")
Given that, though - what is the goal of writing this class wrapper? The same could be written as:
import pymongo
conn = pymongo.Connection('localhost', 27017)
database = conn["Hello"]
collection = database["MyTable"]
If you're trying to create a larger API wrapped around the pymongo database, then I'd recommend looking into some ORM modules that have already been built. There are a few out there - I'm not 100% sure which ones are available for MongoDB, but the one I use (I am biased; I wrote it) is called ORB and can be found at http://docs.projexsoftware.com/api/orb
This is not a specific answer to how to solve your problem, but instead an answer for how to step through what you want to do and work on simpler problems as they arise:
1) Forget about classes at first, and instead
2) Use the Python command line or a Python program like IDLE,
3) Establish your goals by writing the calls that open the MongoDB database and accomplish your task or tasks. In other words, write the simplest code that accomplishes your goals before worrying about classes.
4) Once you get that done and feel ready to move on, write a test class using the documentation. My link is one of many you could find.
5) I think part, but not all, of your problem is that your class is not set up correctly. My class -- not shown completely -- is as follows:
class GlobalData:
    def set_xfer_date(self, value):
        self.xfer_date = value
        self.date_str = str(value)
        self.xfer_date_base = self.date_str[0:10] + " " + "00:00:00"
        # Now build tomorrow.
        self.xfer_date_tomorrow = datetime.today() + timedelta(1)
        self.date_str_tomorrow = str(self.xfer_date_tomorrow)
        self.xfer_date_tomorrow = \
            self.date_str_tomorrow[0:10] + " " + "00:00:00"
        self.section_totals = {}
        self.billable_reads = {}

    def set_xfer_fnam_out(self, value):
        self.xfer_fnam_out = value

    def set_xfer_dir_in(self, value):
        self.xfer_dir_in = value

    .
    .
    .

    def get_billable_reads(self):
        return self.billable_reads
One of the problems I see is that you are not referring to your data using self.
Good luck.
For example, I have a
class BaseHandler(object):
    def prepare(self):
        self.prepped = 1
I do not want everyone that subclasses BaseHandler and also wants to implement prepare to have to remember to call
super(SubBaseHandler, self).prepare()
Is there a way to ensure the superclass method is run even if the subclass also implements prepare?
I have solved this problem using a metaclass.
Using a metaclass allows the implementer of BaseHandler to be sure that all subclasses will call the superclass's prepare() with no adjustment to any existing code.
The metaclass looks for an implementation of prepare on both classes and then overwrites the subclass prepare with one that calls superclass.prepare followed by subclass.prepare.
class MetaHandler(type):
    def __new__(cls, name, bases, attrs):
        instance = type.__new__(cls, name, bases, attrs)
        super_instance = super(instance, instance)
        if hasattr(super_instance, 'prepare') and hasattr(instance, 'prepare'):
            super_prepare = getattr(super_instance, 'prepare')
            sub_prepare = getattr(instance, 'prepare')
            def new_prepare(self):
                super_prepare(self)
                sub_prepare(self)
            setattr(instance, 'prepare', new_prepare)
        return instance

class BaseHandler(object):
    __metaclass__ = MetaHandler
    def prepare(self):
        print 'BaseHandler.prepare'

class SubHandler(BaseHandler):
    def prepare(self):
        print 'SubHandler.prepare'
Using it looks like this:
>>> sh = SubHandler()
>>> sh.prepare()
BaseHandler.prepare
SubHandler.prepare
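Note that the example above relies on the Python 2 __metaclass__ attribute (and print statements). On Python 3 the metaclass is passed as a keyword argument in the class statement instead; a minimal sketch of the same base class, with MetaHandler unchanged:

class BaseHandler(metaclass=MetaHandler):
    def prepare(self):
        print('BaseHandler.prepare')

The subclasses stay the same apart from using print() as a function.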
Tell your developers to define prepare_hook instead of prepare, but tell the users to call prepare:
class BaseHandler(object):
    def prepare(self):
        self.prepped = 1
        self.prepare_hook()

    def prepare_hook(self):
        pass

class SubBaseHandler(BaseHandler):
    def prepare_hook(self):
        pass
foo = SubBaseHandler()
foo.prepare()
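For instance, a subclass whose hook actually does some work (LoggingHandler is just a made-up name here) - callers still only ever call prepare():

class LoggingHandler(BaseHandler):
    def prepare_hook(self):
        print("LoggingHandler-specific preparation")

handler = LoggingHandler()
handler.prepare()   # sets prepped, then runs the hook above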
If you want more complex chaining of prepare calls from multiple subclasses, then your developers should really use super as that's what it was intended for.
Just accept that you have to tell people subclassing your class to call the base method when overriding it. Every other solution either requires you to explain that they must do something else instead, or involves some un-pythonic hacks which could be circumvented too.
Python's object inheritance model was designed to be open, and any attempt to go another way will just overcomplicate a problem which does not really exist anyway. Just tell everybody using your stuff to either follow your "rules" or accept that the program will mess up.
One explicit solution without too much magic going on would be to maintain a list of prepare call-backs:
class BaseHandler(object):
    def __init__(self):
        self.prepare_callbacks = []

    def register_prepare_callback(self, callback):
        self.prepare_callbacks.append(callback)

    def prepare(self):
        # Do BaseHandler preparation
        for callback in self.prepare_callbacks:
            callback()

class MyHandler(BaseHandler):
    def __init__(self):
        BaseHandler.__init__(self)
        self.register_prepare_callback(self._prepare)

    def _prepare(self):
        # whatever
        pass
In general, you can try using __getattribute__ to achieve something like this (until the moment someone overrides that method too), but it goes against Python's ideas. There is a reason you are able to access private object members in Python; the reason is mentioned in import this.
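For completeness, the kind of __getattribute__ hack meant here would look roughly like this - a fragile sketch, not a recommendation:

class BaseHandler(object):
    def prepare(self):
        self.prepped = 1

    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        if name == 'prepare':
            # wrap whatever 'prepare' was found (possibly a subclass
            # override) so the base preparation always runs first
            def wrapped(*args, **kwargs):
                BaseHandler.prepare(self)
                if attr.__func__ is not BaseHandler.prepare:
                    return attr(*args, **kwargs)
            return wrapped
        return attr

Any subclass override of prepare now runs after the base version, but only for attribute access through the instance, and only until someone overrides __getattribute__ as well.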