How to override an imported method with my own - python

I have a Python file which can be summarized as follows:
from external_libs import save

class FakeClass1(MotherFakeClass1):
    @property
    def field(self):
        if self.settings['save_parameter_booelan']:  # settings comes from the mother class but is irrelevant here
            import FakeClass2.save as save
            # I want to override the save method with the one defined in FakeClass2
            return BehaviorModifierField(super(FakeClass1, self).field)  # the behavior modifier decorates the new field; what it does is irrelevant here
        return super(FakeClass1, self).field

    def fakeMethod(self, boolean_val):
        save('blabla')

class FakeClass2:
    @staticmethod
    def save(test):
        ...  # irrelevant core of the method
The idea is there, but I'm struggling to find the right way to do this.
I think I could do it more cleanly if I moved FakeClass2 into another file, but I don't want to.
Do you have a better idea?

A staticmethod is not the right choice, as it's invoked via Class.method. Your save is not used like that. And even if it were, the call happens in FakeClass1, not FakeClass2.
If you want to invoke a different save depending on the class, just add a save METHOD (not a function!) and have it call the function of choice, e.g.:
from library import standard_save

class A:
    def work(self):
        self.save()

    def save(self):
        standard_save()

class B(A):
    def save(self):
        do_something_else()
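For example (do_something_else is a hypothetical stand-in for whatever B needs to do instead of the standard save):

a = A()
a.work()   # A.work calls self.save, which resolves to A.save and runs standard_save()

b = B()
b.work()   # work is inherited from A, but self.save now resolves to B.save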

Don't pass a boolean; pass the function to be used.
import external_libs

class FakeClass1:
    def __init__(self, save_function=external_libs.save):
        self.save = save_function

    # Does this method do anything other than change the save function
    # to use? If not, it can be eliminated.
    def field(self):
        # Use self.save as the function to save things
        ...

    def fakeMethod(self, boolean_val):
        self.save('blabla')

class FakeClass2:
    @staticmethod
    def save(test):
        ...  # irrelevant core of the method

instance1 = FakeClass1(FakeClass2.save)
instance2 = FakeClass1()  # defaults to external_libs.save

I think you are overcomplicating things.
Having a boolean value determine the behaviour of the save method means that save behaves differently for each instance of the class, according to that instance's boolean value.
If this is what you want, this is the simplest way I can think of:
class FakeClass1(MotherFakeClass1):
    def __init__(self):
        # your __init__ here
        ...

    def save(self):
        if self.settings['save_parameter_booelan']:
            FakeClass2.save()
            # or whatever method you want to use in this case,
            # even if it is not a method of this class or of another class
        else:
            raise NotImplementedError
            # or whatever code should be executed by save
            # when save_parameter_booelan is false

Difficulties with re-using a variable

Here is a part of my code:
class projet(object):
    def nameCouche(self):
        valLissage = float(ui.valLissage.displayText())
        return valLissage

    valCouche = nameCouche()  # asks for a positional argument, but 'self' doesn't work

    def choixTraitement(self):
        ui.okLissage.clicked.connect(p.goLissage)

    def goLissage(self, valCouche):
        if ui.chkboxLissage.isChecked():
            print(valCouche)  # result is False
            os.system(r'"C:\Program Files\FME\fme.exe" D:\Stelios\..... --MAX_NUM_POINTS {0}'.format(valCouche))
So I would like to use valCouche in the goLissage method, but it doesn't work.
I thought valCouche would hold the value of valLissage, but instead it gives False as a value.
I've tried different alternatives, but it still doesn't work.
You've got multiple problems here.
First, if you write this in the middle of a class definition:
valCouche = nameCouche()
... you're creating a class attribute, which is shared by all instances, not a normal instance attribute.
Also, you're running this at class definition time. That means there is no self yet--there aren't any instances yet to be self--so you can't call a method like nameCouche, because you don't have anything to call it on.
What you want to do is call the method at instance initialization time, on the instance being initialized, and store the return value in an instance attribute:
def __init__(self):
    self.valCouche = self.nameCouche()
Then, when you want to access this value in another method later, you have to access it as self.valCouche.
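Putting it together, a minimal sketch (assuming ui is the Qt form object from your code and that the value is available when the instance is created):

class projet(object):
    def __init__(self):
        self.valCouche = self.nameCouche()   # store the value on the instance

    def nameCouche(self):
        return float(ui.valLissage.displayText())

    def goLissage(self):
        if ui.chkboxLissage.isChecked():
            print(self.valCouche)            # reads the instance attribute set in __init__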
If you make those changes, it will work. But your object model still doesn't make much sense. Why is nameCouche a method when it doesn't have anything to do with the object, and doesn't access any of its attributes? Maybe it makes sense as a @staticmethod, but really, I think it makes more sense just as a plain function outside the class. In fact, none of the code you've written seems to have anything to do with the class.
This kind of cram-everything-into-the-class design is often a sign that you're trying to write Java code in Python, and haven't yet really understood how Python does OO. You might want to read a good tutorial on Python classes. But briefly: if you're writing a class just to have somewhere to dump a bunch of vaguely-related functions, what you want is a module, not a class. If you have some reason to have instances of that class, and the functions all act on the data of each instance, then you want a class.
You have to declare the variable in the __init__ method (the constructor) and then use it in your code, e.g.:
class projet(object):
    def __init__(self):
        self.valCouche = ''

    def nameCouche(self):
        valLissage = float(ui.valLissage.displayText())
        return valLissage

    def choixTraitement(self):
        ui.okLissage.clicked.connect(p.goLissage)

    def goLissage(self, valCouche):
        if ui.chkboxLissage.isChecked():
            self.valCouche = self.nameCouche()
            print(self.valCouche)
            os.system(r'"C:\Program Files\FME\fme.exe" D:\Stelios\..... --MAX_NUM_POINTS {0}'.format(self.valCouche))
You have to define an initialization function: def __init__(self).
Defining valCouche as an instance attribute makes it accessible in all the methods, so we have the following:
class projet(object):
    def __init__(self):
        self.valCouche = ''

    def nameCouche(self):
        self.valCouche = float(ui.valLissage.displayText())

    @staticmethod  # there is no need for self here, so it can be a static method
    def choixTraitement():
        ui.okLissage.clicked.connect(p.goLissage)

    def goLissage(self):
        if ui.chkboxLissage.isChecked():
            print(self.valCouche)
            os.system(r'"C:\Program Files\FME\fme.exe" D:\Stelios\..... --MAX_NUM_POINTS {0}'.format(self.valCouche))
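Hypothetical usage, assuming the ui widgets are already wired up:

p = projet()
p.nameCouche()   # read the value from the UI and store it on the instance
p.goLissage()    # now prints the stored float instead of False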

Calling private methods from a classmethod: python

I am trying to implement multiple constructors in python and one of the suggestions (through online searching) was to use the classmethod. However, using this, I am having issues with code reuse and modularity. Here is an example where I can create an object based on a supplied file or through some other means:
class Image:
    def __init__(self, filename):
        self.image = lib.load(filename)
        self.init_others()

    @classmethod
    def from_data(cls, data, header):
        cls.image = lib.from_data(data, header)
        cls.init_others()
        return cls

    def init_others(self):
        # initialise some other variables
        self.something = numpy.matrix(4, 4)
Now it seems that I cannot do that. The cls.init_others() call fails by saying that I have not provided the object to call it on. I guess I can initialise things in the from_data function itself, but then I repeat the code in the __init__ method and the other "constructors". Does anyone know how I can call these other initialiser methods from these @classmethod-marked functions? Or perhaps someone knows a better way to initialise these variables.
I come from a C++ background. So still trying to find my way around the python constructs.
Your class method should create and return a new instance of the class, not assign class attributes and return the class itself. As an alternative to the keyword arguments, you could do something like:
class Image:
    def __init__(self, image):
        self.image = image
        self.init_others()

    @classmethod
    def from_data(cls, data, header):
        return cls(lib.from_data(data, header))

    @classmethod
    def from_filename(cls, filename):
        return cls(lib.load(filename))

    def init_others(self):
        # initialise some other variables
        self.something = numpy.matrix(4, 4)
This adds the ability to create an instance if you already have the image, too.
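Hypothetical usage (the filename, data and header values are placeholders):

img_from_file = Image.from_filename("photo.png")
img_from_data = Image.from_data(data, header)
img_direct = Image(already_loaded_image)   # if you already have the image object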
I would recommend not trying to create multiple constructors, and use keyword arguments instead:
class Image(object):
    def __init__(self, filename=None, data=None, header=None):
        if filename is not None:
            self.image = lib.load(filename)
        elif data is not None and header is not None:
            self.image = lib.from_data(data, header)
        else:
            raise ValueError("You must provide filename or both data and header")
        self.init_others()

    def init_others(self):
        # initialise some other variables
        self.something = numpy.matrix(4, 4)
This is a more Pythonic way to handle this scenario.
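Hypothetical usage of this single constructor:

img1 = Image(filename="photo.png")
img2 = Image(data=data, header=header)
img3 = Image()   # raises ValueError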
You should always pass in self as the first argument to any method that will act on a class instance. Python will not automatically determine the instance you're trying to call the method for unless you do that. So if you want to call a method like
the_image = Image("file.txt")
the_image.interpolate(foo, bar)
you need to define the method within Image as
    def interpolate(self, foo, bar):
        # Your code
        ...

How can I ensure that one of my class's methods is always called even if a subclass overrides it?

For example, I have a
class BaseHandler(object):
    def prepare(self):
        self.prepped = 1
I do not want everyone that subclasses BaseHandler and also wants to implement prepare to have to remember to call
super(SubBaseHandler, self).prepare()
Is there a way to ensure the superclass method is run even if the subclass also implements prepare?
I have solved this problem using a metaclass.
Using a metaclass allows the implementer of the BaseHandler to be sure that all subclasses will call the superclass's prepare() with no adjustment to any existing code.
The metaclass looks for an implementation of prepare on both classes and then overwrites the subclass prepare with one that calls superclass.prepare followed by subclass.prepare.
class MetaHandler(type):
    def __new__(cls, name, bases, attrs):
        instance = type.__new__(cls, name, bases, attrs)
        super_instance = super(instance, instance)
        if hasattr(super_instance, 'prepare') and hasattr(instance, 'prepare'):
            super_prepare = getattr(super_instance, 'prepare')
            sub_prepare = getattr(instance, 'prepare')
            def new_prepare(self):
                super_prepare(self)
                sub_prepare(self)
            setattr(instance, 'prepare', new_prepare)
        return instance

class BaseHandler(object):
    __metaclass__ = MetaHandler
    def prepare(self):
        print 'BaseHandler.prepare'

class SubHandler(BaseHandler):
    def prepare(self):
        print 'SubHandler.prepare'
Using it looks like this:
>>> sh = SubHandler()
>>> sh.prepare()
BaseHandler.prepare
SubHandler.prepare
Tell your developers to define prepare_hook instead of prepare, but
tell the users to call prepare:
class BaseHandler(object):
    def prepare(self):
        self.prepped = 1
        self.prepare_hook()

    def prepare_hook(self):
        pass

class SubBaseHandler(BaseHandler):
    def prepare_hook(self):
        pass

foo = SubBaseHandler()
foo.prepare()
If you want more complex chaining of prepare calls from multiple subclasses, then your developers should really use super as that's what it was intended for.
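For reference, a minimal sketch of that conventional super-based chaining (each override calls up the MRO explicitly):

class BaseHandler(object):
    def prepare(self):
        self.prepped = 1

class SubBaseHandler(BaseHandler):
    def prepare(self):
        super(SubBaseHandler, self).prepare()   # runs BaseHandler.prepare first
        self.sub_prepped = True

class SubSubHandler(SubBaseHandler):
    def prepare(self):
        super(SubSubHandler, self).prepare()    # chains through SubBaseHandler and BaseHandler
        self.sub_sub_prepped = True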
Just accept that you have to tell people subclassing your class to call the base method when overriding it. Every other solution either requires you to explain something else to them instead, or involves un-pythonic hacks which could be circumvented too.
Python's object inheritance model was designed to be open, and any attempt to go another way will just overcomplicate a problem which does not really exist anyway. Just tell everybody using your code to either follow your "rules", or the program will break.
One explicit solution without too much magic going on would be to maintain a list of prepare call-backs:
class BaseHandler(object):
    def __init__(self):
        self.prepare_callbacks = []

    def register_prepare_callback(self, callback):
        self.prepare_callbacks.append(callback)

    def prepare(self):
        # Do BaseHandler preparation
        for callback in self.prepare_callbacks:
            callback()

class MyHandler(BaseHandler):
    def __init__(self):
        BaseHandler.__init__(self)
        self.register_prepare_callback(self._prepare)

    def _prepare(self):
        # whatever
        pass
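Hypothetical usage (assuming _prepare does something useful):

handler = MyHandler()
handler.prepare()   # runs the BaseHandler preparation, then MyHandler._prepare via the callback list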
In general you can try using __getattribute__ to achieve something like this (until the moment someone overrides that method too), but it goes against the Python philosophy. There is a reason you are able to access private object members in Python; the reasoning is hinted at in import this.
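For completeness, a minimal sketch of that (discouraged) __getattribute__ approach, assuming subclasses do not override __getattribute__ themselves:

class BaseHandler(object):
    def prepare(self):
        self.prepped = 1

    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        base_prepare = BaseHandler.__dict__['prepare']
        if name != 'prepare' or attr.__func__ is base_prepare:
            return attr                       # not prepare, or not overridden: hand it back untouched
        def wrapper(*args, **kwargs):
            base_prepare(self)                # the base preparation always runs first
            return attr(*args, **kwargs)      # then the subclass override
        return wrapper

class SubHandler(BaseHandler):
    def prepare(self):
        self.sub_prepped = True               # never calls super(), yet prepped still gets set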

Use python decorators on class methods and subclass methods

Goal: Make it possible to decorate class methods. When a class method gets decorated, it gets stored in a dictionary so that other class methods can reference it by a string name.
Motivation: I want to implement the equivalent of ASP.Net's WebMethods. I am building this on top of google app engine, but that does not affect the point of difficulty that I am having.
How it would look if it worked:
class UsefulClass(WebmethodBaseClass):
    def someMethod(self, blah):
        print(blah)

    @webmethod
    def webby(self, blah):
        print(blah)

# the implementation of this class could be completely different, it does not matter
# the only important thing is having access to the web methods defined in sub classes
class WebmethodBaseClass():
    def post(self, methodName):
        webmethods[methodName]("kapow")

...

a = UsefulClass()
a.post("someMethod")  # should error
a.post("webby")       # prints "kapow"
There could be other ways to go about this. I am very open to suggestions
This is unnecessary. Just use getattr:
class WebmethodBaseClass():
    def post(self, methodName):
        getattr(self, methodName)("kapow")
The only caveat is that you have to make sure that only methods intended for use as webmethods can be used thus. The simplest solution, IMO, is to adopt the convention that non-webmethods start with an underscore and have the post method refuse to service such names.
If you really want to use decorators, try this:
def webmethod(f):
    f.is_webmethod = True
    return f
and get post to check for the existence of the is_webmethod attribute before calling the method.
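Combining the two suggestions, a minimal sketch (is_webmethod is the hypothetical marker attribute from above):

def webmethod(f):
    f.is_webmethod = True
    return f

class WebmethodBaseClass(object):
    def post(self, methodName):
        method = getattr(self, methodName)
        if not getattr(method, 'is_webmethod', False):
            raise AttributeError('%s is not exposed as a webmethod' % methodName)
        method("kapow")

class UsefulClass(WebmethodBaseClass):
    def someMethod(self, blah):
        print(blah)

    @webmethod
    def webby(self, blah):
        print(blah)

UsefulClass().post("webby")        # prints "kapow"
UsefulClass().post("someMethod")   # raises AttributeError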
This would seem to be the simplest approach to meet your specs as stated:
webmethods = {}

def webmethod(f):
    webmethods[f.__name__] = f
    return f
and, in WebmethodBaseClass,
def post(self, methodName):
    webmethods[methodName](self, "kapow")
I suspect you want something different (e.g., separate namespaces for different subclasses vs a single global webmethods dictionary...?), but it's hard to guess without more info exactly how your desires differ from your specs -- so maybe you can tell us how this simplistic approach fails to achieve some of your desiderata, so it can be enriched according to what you actually want.
class UsefulClass(WebmethodBaseClass):
    def someMethod(self, blah):
        print(blah)

    @webmethod
    def webby(self, blah):
        print(blah)

class WebmethodBaseClass():
    def post(self, methodName):
        method = getattr(self, methodName)
        if method.webmethod:
            method("kapow")

...

def webmethod(f):
    f.webmethod = True
    return f

a = UsefulClass()
a.post("someMethod")  # should error
a.post("webby")       # prints "kapow"

Using the docstring from one method to automatically overwrite that of another method

The problem: I have a class which contains a template method execute which calls another method _execute. Subclasses are supposed to overwrite _execute to implement some specific functionality. This functionality should be documented in the docstring of _execute.
Advanced users can create their own subclasses to extend the library. However, another user dealing with such a subclass should only use execute, so he won't see the correct docstring if he uses help(execute).
Therefore it would be nice to modify the base class in such a way that in a subclass the docstring of execute is automatically replaced with that of _execute. Any ideas how this might be done?
I was thinking of metaclasses to do this, to make this completely transparent to the user.
Well, if you don't mind copying the original method in the subclass, you can use the following technique.
import new

def copyfunc(func):
    return new.function(func.func_code, func.func_globals, func.func_name,
                        func.func_defaults, func.func_closure)

class Metaclass(type):
    def __new__(meta, name, bases, attrs):
        for key in attrs.keys():
            if key[0] == '_':
                skey = key[1:]
                for base in bases:
                    original = getattr(base, skey, None)
                    if original is not None:
                        copy = copyfunc(original)
                        copy.__doc__ = attrs[key].__doc__
                        attrs[skey] = copy
                        break
        return type.__new__(meta, name, bases, attrs)

class Class(object):
    __metaclass__ = Metaclass
    def execute(self):
        '''original doc-string'''
        return self._execute()

class Subclass(Class):
    def _execute(self):
        '''sub-class doc-string'''
        pass
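With that metaclass in place (under Python 2, where the new module exists), the copied execute on the subclass should report the _execute docstring:

>>> Class.execute.__doc__
'original doc-string'
>>> Subclass.execute.__doc__
'sub-class doc-string'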
Is there a reason you can't override the base class's execute function directly?
class Base(object):
    def execute(self):
        ...

class Derived(Base):
    def execute(self):
        """Docstring for derived class"""
        Base.execute(self)
        # ...stuff specific to Derived...
If you don't want to do the above:
Method objects don't support writing to the __doc__ attribute, so you have to change __doc__ in the actual function object. Since you don't want to override the one in the base class, you'd have to give each subclass its own copy of execute:
class Derived(Base):
    def execute(self):
        return Base.execute(self)

    def _execute(self):
        """Docstring for subclass"""
        ...

    execute.__doc__ = _execute.__doc__
but this is similar to a roundabout way of redefining execute...
Look at the functools.wraps() decorator; it does all of this, but I don't know offhand if you can get it to run in the right context
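As a rough sketch of that idea, functools.wraps can copy the docstring from _execute onto the subclass's execute (reusing the Base class from above; note that wraps also copies __name__, which may not be what you want here):

import functools

class Derived(Base):
    def _execute(self):
        """Docstring for subclass"""

    @functools.wraps(_execute)          # copies _execute.__doc__ (and __name__) onto execute
    def execute(self, *args, **kwargs):
        return Base.execute(self, *args, **kwargs)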
Well the doc-string is stored in __doc__ so it wouldn't be too hard to re-assign it based on the doc-string of _execute after the fact.
Basically:
class MyClass(object):
    def execute(self):
        '''original doc-string'''
        self._execute()

class SubClass(MyClass):
    def _execute(self):
        '''sub-class doc-string'''
        pass

    # re-assign doc-string of execute
    def execute(self, *args, **kw):
        return MyClass.execute(self, *args, **kw)
    execute.__doc__ = _execute.__doc__
execute has to be re-declared so that the doc-string gets attached to the version of execute for SubClass and not for MyClass (which would otherwise interfere with other sub-classes).
That's not a very tidy way of doing it, but from the POV of the user of a library it should give the desired result. You could then wrap this up in a meta-class to make it easier for people who are sub-classing.
I agree that the simplest, most Pythonic way of approaching this is to simply redefine execute in your subclasses and have it call the execute method of the base class:
class Sub(Base):
    def execute(self):
        """New docstring goes here"""
        return Base.execute(self)
This is very little code to accomplish what you want; the only downside is that you must repeat this code in every subclass that extends Base. However, this is a small price to pay for the behavior you want.
If you want a sloppy and verbose way of making sure that the docstring for execute is dynamically generated, you can use the descriptor protocol, which would be significantly less code than the other proposals here. This is annoying because you can't just set a descriptor on an existing function, which means that execute must be written as a separate class with a __call__ method.
Here's the code to do this, but keep in mind that my above example is much simpler and more Pythonic:
class Executor(object):
    __doc__ = property(lambda self: self.inst._execute.__doc__)

    def __call__(self):
        return self.inst._execute()

class Base(object):
    execute = Executor()

class Sub(Base):
    def __init__(self):
        self.execute.inst = self

    def _execute(self):
        """Actually does something!"""
        return "Hello World!"

spam = Sub()
print spam.execute.__doc__  # prints "Actually does something!"
help(spam)  # the execute method says "Actually does something!"
