This question already has answers here:
Adding a method to an existing object instance in Python
(19 answers)
Closed 4 years ago.
Q: Is there a way to alter a method of an existing object in Python (3.6)? (By "method" I mean a function that is passed self as an argument.)
Example
Let's say I have a class Person having some very useful method SayHi():
class Person(object):
    Cash = 100

    def HasGoodMood(self):
        return self.Cash > 10

    def SayHi(self):
        if self.HasGoodMood():
            print('Hello!')
        else:
            print('Hmpf.')
>>> joe = Person()
>>> joe.SayHi()
Hello!
As you can see, the person's response depends on their current mood, computed by the method HasGoodMood(). By default, a person is in a good mood whenever they have more than $10 in cash on them.
I can easily create a person who does not care about the money and is happy all the time:
>>> joe.HasGoodMood = lambda: True
>>> joe.SayHi()
Hello!
>>> joe.Cash = 0
>>> joe.SayHi()
Hello!
Cool. Notice how Python knows that when using the original implementation of HasGoodMood, it silently passes self as the first argument, but if I change it to lambda: True, it calls the function with no arguments. The problem is: what if I want to change the default HasGoodMood to another function that also accepts self as a parameter?
Let's continue our example: what if I want to create a greedy Person who is only happy if they have more than 100$ on them? I would like to do something like:
>>> greedy_jack = Person()
>>> greedy_jack.HasGoodMood = lambda self: self.Cash > 100
TypeError: <lambda>() missing 1 required positional argument: 'self'
Unfortunately, this does not work. Is there some other way to change a method?
Disclaimer: The above example is just for demonstration purposes. I know that I could use inheritance or keep a cash threshold as a property of the Person. But that is not the point of the question.
Using some tips from:
Is it possible to change an instance's method implementation without changing all other instances of the same class?
you can do the following, using the types module to bind a method to the created object without affecting the class. You need to do this because a plain function assigned to an instance attribute does not automatically receive the self object as its first argument, but a bound method does.
import types
joe = Person()
bob = Person()
joe.SayHi()
>>> Hello!
def greedy_has_good_mood(self):
    return self.Cash > 100
joe.HasGoodMood = types.MethodType(greedy_has_good_mood, joe)
joe.SayHi()
>>> Hmpf.
bob.SayHi()
>>> Hello!
When you write a def in a class and then call it on an instance, that's a method, and the mechanics of method calling fill in the self argument for you.
By assigning to HasGoodMood in your instance, you are not putting a new method there, but putting a function into the attribute. You can read the attribute to get the function, and call it, and though that looks like a method call, it's just calling a function that happens to be stored in an attribute. You won't get the self parameter supplied automatically.
But you already know what self is going to be, since you're assigning this function into one particular object.
greedy_jack.HasGoodMood = (lambda self=greedy_jack: self.Cash > 100)
This associates the function argument self with the current value of the variable greedy_jack.
Lambdas in Python can only contain a single expression. If you needed a longer function, you could use a def instead.
def greedy_jack_HasGoodMood(self=greedy_jack):
    return self.Cash > 100
greedy_jack.HasGoodMood = greedy_jack_HasGoodMood
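As a quick sanity check (a sketch assuming the Person class from the question), the patched instance now needs more than $100 in cash to be cheerful:

greedy_jack.Cash = 50
greedy_jack.SayHi()   # prints 'Hmpf.' because 50 is not > 100
greedy_jack.Cash = 150
greedy_jack.SayHi()   # prints 'Hello!'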
For a less hacky solution, see Andrew McDowell's answer.
Inheritance is the way to go.
It can be something as simple as:
class Person(object):
    Cash = 100

    def HasGoodMood(self):
        return self.Cash > 10

    def SayHi(self):
        if self.HasGoodMood():
            print('Hello!')
        else:
            print('Hmpf.')

class newPersonObject(Person):
    def HasGoodMood(self):
        return self.Cash > 100
>>> greedy = newPersonObject()
>>> greedy.SayHi()
Hmpf.
When you do greedy_jack.HasGoodMood = lambda self: self.Cash > 100 you're doing roughly the same thing: you're only overriding greedy_jack's attributes. Using the approach above, you can create greedy people, happy people, forever unhappy people, hippies, etc.
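For example, a couple more hypothetical subclasses along the same lines:

class HappyPerson(Person):
    def HasGoodMood(self):
        return True   # always cheerful, regardless of Cash

class GrumpyPerson(Person):
    def HasGoodMood(self):
        return False  # never cheerful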
A better option in my opinion would be to accept cash as a parameter when creating the object. That way you can dynamically make people greedy or normal. (not tested)
class Person(object):
    def __init__(self, cash_to_be_happy):
        self.cash = cash_to_be_happy

    def HasGoodMood(self, has_money):
        return has_money > self.cash

    def SayHi(self, has_money):
        if self.HasGoodMood(has_money):
            print('Hello!')
        else:
            print('Hmpf.')
>>> joe = Person(100)
>>> joe.SayHi(150)
Hello!
>>> greedy_joe = Person(200)
>>> greedy_joe.SayHi(150)
Hmpf.
I apologize if I'm butchering the terminology. I'm trying to understand the code in this example on how to chain a custom function onto a PySpark dataframe. I'd really want to understand exactly what it's doing, and if it is not awful practice before I implement anything.
From the way I'm understanding the code, it:
defines a function g with sub-functions inside of it, that returns a copy of itself
assigns the sub-functions to g as attributes
assigns g as a property of the DataFrame class
I don't think any of them becomes a method at any step in the process (when I do getattr, it always says "function").
When I run a simplified version of the code (below, as best as I could manage), it seems that only when I assign the function as a property of a class, and then instantiate at least one copy of that class, do the attributes on the function become available (even outside of the class). I want to understand what is happening and why.
An answer here (https://stackoverflow.com/a/17007966/19871699) indicates that this is expected behavior, but doesn't really explain what or why it is. I've read this too but I'm having trouble seeing the connection to the code above.
I read here about the setattr part of the code, but it doesn't cover exactly the use case above. This post has some use cases where people do it, but I'm not seeing how it directly applies here, unless I've missed something.
The confusing part is when the inner attributes become available.
class SampleClass():
    def __init__(self):
        pass

def my_custom_attribute(self):
    def inner_function_one():
        pass
    setattr(my_custom_attribute, "inner_function", inner_function_one)
    return my_custom_attribute
[x for x in dir(my_custom_attribute) if x[0] != "_"]
returns []
then when I do:
SampleClass.custom_attribute = property(my_custom_attribute)
[x for x in dir(my_custom_attribute) if x[0] != "_"]
it returns []
but when I do:
class_instance = SampleClass()
class_instance.custom_attribute
[x for x in dir(my_custom_attribute) if x[0] != "_"]
it returns ['inner_function']
In the code above though, if I do SampleClass.custom_attribute = my_custom_attribute instead of =property(...) the [x for x... code still returns [].
edit: I'm not intending to access the function itself outside of the class. I just don't understand the behavior, and don't like implementing something I don't understand.
So, setattr is not relevant here. This would all work exactly the same without it, say, by just doing my_custom_attribute.inner_function = inner_function_one and so on. What is relevant is that the approach in the link you showed (your example doesn't make its purpose especially clear) relies on using a property, which is a descriptor. But the function won't get called unless you access the attribute corresponding to the property on an instance. This comes down to how property works. For any property, given a class Foo:
Foo.attribute_name = property(some_function)
Then some_function won't get called until you do Foo().attribute_name. That is the whole point of property.
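A minimal sketch of that lazy behavior (the print is only there to show when the function actually runs):

class Foo:
    pass

def some_function(self):
    print("some_function was called")
    return 42

Foo.attribute_name = property(some_function)

f = Foo()                   # nothing printed yet
value = f.attribute_name    # now "some_function was called" is printed
print(value)                # 42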
But this whole solution is very confusingly engineered. It relies on the above behavior, and it sets attributes on the function object.
Note, if all you want to do is add some method to your DataFrame class, you don't need any of this. Consider the following example (using pandas for simplicity):
>>> import pandas as pd
>>> def foobar(self):
... print("in foobar with instance", self)
...
>>> pd.DataFrame.baz = foobar
>>> df = pd.DataFrame(dict(x=[1,2,3], y=['a','b','c']))
>>> df
x y
0 1 a
1 2 b
2 3 c
>>> df.baz()
in foobar with instance x y
0 1 a
1 2 b
2 3 c
That's it. You don't need all that rigamarole. Of course, if you wanted to add a nested accessor, df.custom.whatever, you would need something a bit more complicated. You could use the approach in the OP, but I would prefer something more explicit:
import pandas as pd

class AccessorDelegator:
    def __init__(self, accessor_type):
        self.accessor_type = accessor_type

    def __get__(self, instance, cls=None):
        return self.accessor_type(instance)

class CustomMethods:
    def __init__(self, instance):
        self.instance = instance

    def foo(self):
        # do something with self.instance as if it were your `self` on the dataframe being augmented
        print(self.instance.value_counts())

pd.DataFrame.custom = AccessorDelegator(CustomMethods)

df = pd.DataFrame(dict(a=[1, 2, 3], b=['a', 'b', 'c']))
df.custom.foo()
The above will print:
a b
1 a 1
2 b 1
3 c 1
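As an aside, if I remember the API correctly, pandas itself ships a registration helper for exactly this nested-accessor pattern (pandas.api.extensions.register_dataframe_accessor); a rough sketch, with "custom" and foo as placeholder names:

import pandas as pd

@pd.api.extensions.register_dataframe_accessor("custom")
class CustomAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def foo(self):
        # operate on the wrapped DataFrame
        print(self._obj.value_counts())

df = pd.DataFrame(dict(a=[1, 2, 3], b=['a', 'b', 'c']))
df.custom.foo()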
When you call a function, the attributes set inside that function aren't returned; only the return value is passed back.
In other words, the additional attributes are only available on the returned function object, not on 'g' itself.
Try moving setattr() outside of the function.
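A rough sketch of that suggestion, using hypothetical names (the attribute is attached once at import time, so it is visible before anything is called):

def g():
    return g

def inner_function_one():
    pass

# attach the attribute at module level instead of inside g()
setattr(g, "inner_function", inner_function_one)

print([x for x in dir(g) if not x.startswith("_")])  # ['inner_function']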
I have a simple class like this (connection holds a DB connection):
import pandas as pd
from animal import kettle
class cat:
    def foo(connection):
        a = pd.read_sql('select * from zoo', connection)
        return1 = kettle.boo1(a)
        return2 = kettle.boo2(a)
        return return1, return2
Now I want to pass a to both boo1 and boo2 of kettle. Am I passing it the correct way in foo() above?
I thought this way was correct and tried it, but is this the right way to pass it?
animal.py:
class kettle:
    def boo1(return1):
        print(return1)

    def boo2(return2):
        print(return2)
Sorry if this doesn't make any sense; my intention is to pass a to both boo1 and boo2 of the kettle class.
This looks like the correct approach to me: by assigning the return value of pd.read_sql('select * from zoo', connection) to a first and then passing a to kettle.boo1 and kettle.boo2, you ensure you do the potentially time-consuming database IO only once.
One thing to keep in mind with this design pattern when you are passing objects such as lists/dicts/dataframes is the question of whether kettle.boo1 changes the value that is in a. If it does, kettle.boo2 will receive the modified version of a as an input, which can lead to unexpected behavior.
A very minimal example is the following:
>>> def foo(x):
... x[0] = 'b'
...
>>> x = ['a'] # define a list of length 1
>>> foo(x) # call a function that modifies the first element in x
>>> print(x) # the value in x has changed
['b']
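If that kind of mutation is a concern in your case, one option (a sketch keeping the calling convention from the question; DataFrame.copy() makes a full copy by default) is to hand each consumer its own copy:

import pandas as pd
from animal import kettle

class cat:
    def foo(connection):
        a = pd.read_sql('select * from zoo', connection)
        # give each call its own copy so boo1 cannot affect what boo2 sees
        return1 = kettle.boo1(a.copy())
        return2 = kettle.boo2(a.copy())
        return return1, return2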
There are (many) possible solutions to your problem, whatever it might be. I assume you are just starting out with object-oriented programming in Python and get errors along the lines of
unbound method boo1() must be called with kettle instance as first argument
and probably want this solution:
Give your class methods an instance parameter:
def boo1(self, return1):
Instantiate the class kettle in cat.foo:
k = kettle()
Then use it like:
k.boo1(a)
Same for the boo2 method.
Also you probably want to:
return return1 # instead of or after print(return1)
as your methods return None at the moment.
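Putting those pieces together, a sketch of what kettle and cat.foo might look like (names kept from the question; untested against a real database):

# animal.py
class kettle:
    def boo1(self, return1):
        print(return1)
        return return1

    def boo2(self, return2):
        print(return2)
        return return2

# cat.py
import pandas as pd
from animal import kettle

class cat:
    def foo(self, connection):
        a = pd.read_sql('select * from zoo', connection)
        k = kettle()                 # instantiate kettle
        return k.boo1(a), k.boo2(a)  # pass a to both methods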
I'm making a game in pygame and I have made an 'abstract' class whose sole job is to store the sprites for a given level (with the intent of having these level objects in a list to facilitate the player being moved from one level to another).
Alright, so to the question. Can I do the equivalent of this in Python (code courtesy of Java)?
Object object = new Object() {
    public void overriddenFunction() {
        // new functionality
    }
};
Then when I build the levels in the game, I would simply have to override the constructor (or a class/instance method that is responsible for building the level) with the information on where the sprites go, because making a new class for every level in the game isn't a very elegant answer. Alternatively, I would have to make methods within the level class that build the level once a level object is instantiated, placing the sprites as needed.
So, before one of the more staunch developers goes on about how anti-Python this might be (I've read enough of this site to get that vibe from Python experts), just tell me if it's doable.
Yes, you can!
class Foo:
    def do_other(self):
        print('other!')

    def do_foo(self):
        print('foo!')

def do_baz():
    print('baz!')

def do_bar(self):
    print('bar!')
# Class-wide impact
Foo.do_foo = do_bar
f = Foo()
g = Foo()
# Instance-wide impact
g.do_other = do_baz
f.do_foo() # prints "bar!"
f.do_other() # prints "other!"
g.do_foo() # prints "bar!"
g.do_other() # prints "baz!"
So, before one of the more staunch developers goes on about how anti-Python this might be
Overwriting functions in this fashion (if you have a good reason to do so) seems reasonably pythonic to me. One example of a reason you might have to do this would be a dynamic feature for which static inheritance doesn't or can't apply.
The case against might be found in the Zen of Python:
Beautiful is better than ugly.
Readability counts.
If the implementation is hard to explain, it's a bad idea.
Yes, it's doable. Here, I use functools.partial to get the implied self argument into a regular (non-class-method) function:
import functools
import random

class WackyCount(object):
    "it's a counter, but it has one wacky method"

    def __init__(self, name, value):
        self.name = name
        self.value = value

    def __str__(self):
        return '%s = %d' % (self.name, self.value)

    def incr(self):
        self.value += 1

    def decr(self):
        self.value -= 1

    def wacky_incr(self):
        self.value += random.randint(5, 9)
# although x is a regular wacky counter...
x = WackyCount('spam', 1)
# it increments like crazy:
def spam_incr(self):
    self.value *= 2
x.incr = functools.partial(spam_incr, x)
print (x)
x.incr()
print (x)
x.incr()
print (x)
x.incr()
print (x)
and:
$ python2.7 wacky.py
spam = 1
spam = 2
spam = 4
spam = 8
$ python3.2 wacky.py
spam = 1
spam = 2
spam = 4
spam = 8
Edit to add note: this is a per-instance override. It takes advantage of Python's attribute look-up sequence: if x is an instance of class K, then x.attrname starts by looking at x's dictionary to find the attribute. If not found, the next lookup is in K. All the normal class functions are actually K.func. So if you want to replace the class function dynamically, use @Brian Cane's answer instead.
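A tiny illustration of that lookup order (a sketch; the instance's __dict__ entry shadows the class attribute until it is deleted):

class K(object):
    def func(self):
        return 'class version'

x = K()
x.func = lambda: 'instance version'

print(x.func())   # 'instance version' -- found in x.__dict__ first
del x.func
print(x.func())   # 'class version'    -- falls back to K.func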
I'd suggest using a different class, via inheritance, for each level.
But you might get some mileage out of copy.deepcopy() and monkey patching, if you're really married to treating Python like Java.
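A rough sketch of that idea, with a hypothetical Level class (the copy gets its own patched method while the original stays untouched):

import copy
import types

class Level(object):
    def build(self):
        print('default layout')

level_one = Level()
level_two = copy.deepcopy(level_one)

def custom_build(self):
    print('boss room layout')

# patch only the copy
level_two.build = types.MethodType(custom_build, level_two)

level_one.build()  # default layout
level_two.build()  # boss room layout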
This works in the desired way:
class d:
    def __init__(self, arg):
        self.a = arg

    def p(self):
        print "a= ", self.a
x = d(1)
y = d(2)
x.p()
y.p()
yielding
a= 1
a= 2
I've tried eliminating the "self"s and using a global statement in __init__:
class d:
    def __init__(self, arg):
        global a
        a = arg

    def p(self):
        print "a= ", a
x = d(1)
y = d(2)
x.p()
y.p()
yielding, undesirably:
a= 2
a= 2
is there a way to write it without having to use "self"?
"self" is the way how Python works. So the answer is: No! If you want to cut hair: You don't have to use "self". Any other name will do also. ;-)
Python methods are just functions that are bound to the class or instance of a class. The only difference is that a method (aka bound function) expects the instance object as the first argument. Additionally when you invoke a method from an instance, it automatically passes the instance as the first argument. So by defining self in a method, you're telling it the namespace to work with.
This way when you specify self.a the method knows you're modifying the instance variable a that is part of the instance namespace.
Python scoping works from the inside out, so each function (or method) has its own namespace. If you create a variable a locally from within the method p (these names suck BTW), it is distinct from that of self.a. Example using your code:
class d:
    def __init__(self, arg):
        self.a = arg

    def p(self):
        a = self.a - 99
        print "my a= ", a
        print "instance a= ", self.a
x = d(1)
y = d(2)
x.p()
y.p()
Which yields:
my a= -98
instance a= 1
my a= -97
instance a= 2
Lastly, you don't have to call the first variable self. You could call it whatever you want, although you really shouldn't. It's convention to define and reference self from within methods, so if you care at all about other people reading your code without wanting to kill you, stick to the convention!
Further reading:
Python Classes tutorial
When you remove the self's, you end up having only one variable called a that will be shared not only amongst all your d objects but also in your entire execution environment.
You can't just eliminate the self's for this reason.
This question already has answers here:
How to access (get or set) object attribute given string corresponding to name of that attribute
(3 answers)
Closed 3 years ago.
I have a Python class that has attributes named: date1, date2, date3, etc.
During runtime, I have a variable i, which is an integer.
What I want to do is to access the appropriate date attribute in run time based on the value of i.
For example,
if i == 1, I want to access myobject.date1
if i == 2, I want to access myobject.date2
And I want to do something similar for class instead of attribute.
For example, I have a bunch of classes: MyClass1, MyClass2, MyClass3, etc. And I have a variable k.
if k == 1, I want to instantiate a new instance of MyClass1
if k == 2, I want to instantiate a new instance of MyClass2
How can I do that?
EDIT
I'm hoping to avoid using a giant if-then-else statement to select the appropriate attribute/class.
Is there a way in Python to compose the class name on the fly using the value of a variable?
You can use getattr() to access an attribute when you don't know its name until runtime:
obj = myobject()
i = 7
date7 = getattr(obj, 'date%d' % i) # same as obj.date7
If you keep your numbered classes in a module called foo, you can use getattr() again to access them by number.
foo.py:
class Class1: pass
class Class2: pass
[ etc ]
bar.py:
import foo
i = 3
someClass = getattr(foo, "Class%d" % i) # Same as someClass = foo.Class3
obj = someClass() # someClass is a pointer to foo.Class3
# short version:
obj = getattr(foo, "Class%d" % i)()
Having said all that, you really should avoid this sort of thing because you will never be able to find out where these numbered properties and classes are being used except by reading through your entire codebase. You are better off putting everything in a dictionary.
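A sketch of the dictionary approach, with hypothetical classes and dates, which keeps the lookup explicit and greppable:

import datetime

class MyClass1: pass
class MyClass2: pass

class MyObject:
    def __init__(self):
        # a dict instead of numbered date1/date2/... attributes
        self.dates = {1: datetime.date(2020, 1, 1),
                      2: datetime.date(2020, 6, 1)}

# a registry dict instead of composing class names as strings
classes = {1: MyClass1, 2: MyClass2}

myobject = MyObject()
i, k = 1, 2
print(myobject.dates[i])   # instead of getattr(myobject, 'date%d' % i)
instance = classes[k]()    # instead of getattr(foo, 'Class%d' % k)()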
For the first case, you should be able to do:
getattr(myobject, 'date%s' % i)
For the second case, you can do:
myobject = locals()['MyClass%s' % k]()
However, the fact that you need to do this in the first place can be a sign that you're approaching the problem in a very non-Pythonic way.
OK, well... It seems like this needs a bit of work. Firstly, your date* things should probably be stored as a dict, e.g. myobj.dates[1], and so on.
For the classes, it sounds like you want polymorphism. All of your MyClass* classes should have a common ancestor. The ancestor's __new__ method should figure out which of its children to instantiate.
One way for the parent to know what to make is to keep a dict of the children. There are ways that the parent class doesn't need to enumerate its children by searching for all of its subclasses but it's a bit more complex to implement. See here for more info on how you might take that approach. Read the comments especially, they expand on it.
class Parent(object):
    _children = {}

    def __new__(cls, k):
        return object.__new__(cls._children[k])

class MyClass1(Parent):
    def __init__(self, k):
        self.foo = 1

class MyClass2(Parent):
    def __init__(self, k):
        self.foo = 2

# register the children once they have been defined
Parent._children = {
    1: MyClass1,
    2: MyClass2,
}

bar = Parent(1)
print bar.foo # 1
baz = Parent(2)
print baz.foo # 2
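The subclass-discovery variant mentioned above could look roughly like this (a sketch assuming each child declares a key class attribute):

class Parent(object):
    def __new__(cls, k):
        # search the registered subclasses instead of keeping a manual dict
        for child in cls.__subclasses__():
            if getattr(child, 'key', None) == k:
                return object.__new__(child)
        raise ValueError('no child class registered for %r' % k)

class MyClass1(Parent):
    key = 1
    def __init__(self, k):
        self.foo = 1

class MyClass2(Parent):
    key = 2
    def __init__(self, k):
        self.foo = 2

print Parent(2).foo # 2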
Thirdly, you really should rethink your variable naming. Don't use numbers to enumerate your variables, instead give them meaningful names. i and k are bad to use as they are by convention reserved for loop indexes.
A sample of your existing code would be very helpful in improving it.
To get a list of all the attributes, try:
dir(<class instance>)
I agree with Daenyth, but if you're feeling sassy you can use the __dict__ attribute that comes with all classes:
>>> class nullclass(object):
...     def nullmethod():
...         pass
...
>>> nullclass.__dict__.keys()
['__dict__', '__module__', '__weakref__', 'nullmethod', '__doc__']
>>> nullclass.__dict__["nullmethod"]
<function nullmethod at 0x013366A8>