Pass a class attribute to a function outside of the class - Python

I've created a class called "Test" with attributes "hp" and "max_hp". Now I want to pass these attributes as arguments to an outside function (it is outside the class since I want to use it with instances of other classes too), and add 1 to the attribute if it is not equal to the max attribute. My problem is that even though the function is called with test.hp and test.max_hp as arguments, they don't change.
Here's the code:
class Test:
    def __init__(self):
        self.hp = 10
        self.max_hp = self.hp

def recovery_action(attribute, max_attribute):
    if attribute == max_attribute:
        pass
    else:
        attribute += 1
test = Test()
test.hp -= 5
print(f'test.hp - {test.hp}')
recovery_action(test.hp, test.max_hp)
print(f'test.hp after recovery - {test.hp}')
The problem is that the output looks like this:
test.hp - 5
test.hp after recovery - 5
I passed test.hp as an argument and it wasn't equal to test.max_hp, so 1 should have been added to it - but it stayed the same. What am I doing wrong here?

The problem is that you are not really changing the attribute in your function.
Basically what you do is call recovery_action(5, 10).
If you want to actually change the attribute, you can return the result and assign it, like this:
def recovery_action(attribute, max_attribute):
    if attribute != max_attribute:
        return attribute + 1
    return attribute
Then you can run:
test.hp = recovery_action(test.hp, test.max_hp)

The variable attribute lives in the scope of recovery_action. What you actually want is to reference the instance itself, so that the change persists outside the scope of the function. A solution could be:
def recovery_action(test: Test):
    if test.hp == test.max_hp:
        pass
    else:
        test.hp += 1
This version takes an instance of Test as its argument (the : Test part is just a type hint). The rest of the function is straightforward. Now you modify the attribute on the object you pass in, so the change remains after the function ends.
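For example (reusing the Test class from the question together with this instance-based recovery_action), a quick check could look like this:
test = Test()
test.hp -= 5
recovery_action(test)    # pass the whole instance rather than test.hp
print(test.hp)           # 6 -- the change persists because the object itself was modified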

There is no object representing the attribute itself; test.max_hp is just an expression that evaluates to the value of the attribute. Since an int is immutable, you can't change the value by passing it as an argument. You would need to pass the object and the name of the attribute as two separate arguments, and have the function operate on the object directly.
def recovery_action(obj, attribute, max_):
    if getattr(obj, attribute) == max_:
        pass
    else:
        # read the current value, add 1, and write it back onto the object
        setattr(obj, attribute, getattr(obj, attribute) + 1)
recovery_action(test, 'hp', test.max_hp)
Note that your function doesn't care that the maximum value comes from an object attribute; it only cares that it is some integer.

Changing the argument inside a function will not change the variable you passed in to that function. You can return the changed value and reassign the attribute.
It may also be a good idea to create a getter and setter for the attribute, so that access and changes to it go through one place.
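If you go the getter/setter route, one possible sketch using a property (assuming you still want to track max_hp) is:
class Test:
    def __init__(self):
        self._hp = 10
        self.max_hp = self._hp

    @property
    def hp(self):
        return self._hp

    @hp.setter
    def hp(self, value):
        # clamp writes so hp never exceeds max_hp
        self._hp = min(value, self.max_hp)
With this, test.hp += 1 simply stops increasing once it reaches max_hp.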


TypeError: __init__() missing 1 required positional argument: 'lists'

I created a class, something like below -
import numpy as np

class child:
    def __init__(self, lists):
        self.myList = lists
    def find_mean(self):
        mean = np.mean(self.myList)
        return mean
and when I create an object like below -
obj=child()
it gives the error -
TypeError: __init__() missing 1 required positional argument: 'lists'
if I create object like below then it works well -
obj = child([44, 22, 55])
or If I create the class like below -
class child:
    def find_mean(self, myList):
        mean = np.mean(myList)
        return mean
and then I create the object like below -
obj=child()
then it also works well; however, I need to make it the way I explained at the very beginning. Can you please help me understand this context?
In the first example, the __init__ method expects two parameters:
self is automatically filled in by Python.
lists is a parameter which you must give it. It will try to assign this value to a new variable called self.myList, and it won't know what value it is supposed to use if you don't give it one.
In the second example, you have not written an __init__ method. This means that Python creates its own default __init__ function which will not require any parameters. However, the find_mean method now requires you to give it a parameter instead.
When you say you want to create it in the way you explained at the beginning, this is actually impossible: the class requires a value, and you are not giving it one.
Therefore, it is hard for me to tell what you really want to do. However, one option might be that you want to create the class earlier, and then add a list to it later on. In this case, the code would look like this:
import numpy as np

class Child:
    def __init__(self, lists=None):
        self.myList = lists
    def find_mean(self):
        if self.myList is None:
            return np.nan
        mean = np.mean(self.myList)
        return mean
This code allows you to create the object earlier, and add a list to it later. If you try to call find_mean without giving it a list, it will simply return nan:
child = Child()
print(child.find_mean()) # Returns `nan`
child.myList = [1, 2, 3]
print(child.find_mean()) # Returns `2`
The code you have at the top of your question defines a class called child, which has one attribute, myList, assigned from the lists parameter at instance creation in the __init__ method. This means that you must supply a list when creating an instance of child.
class child:
    def __init__(self, lists):
        self.myList = lists
    def find_mean(self):
        mean = np.mean(self.myList)
        return mean
# works because a list is provided
obj = child([44,22,55])
# does not work because no list is given
obj = child() # TypeError
If you create the class like in your second example, __init__ is no longer being explicitly specified, and as such, the object has no attributes that must be assigned at instance creation:
class child:
    def find_mean(self, myList):
        mean = np.mean(myList)
        return mean
# does not work because `child()` does not take any arguments
obj = child([44,22,55]) # TypeError
# works because no list is needed
obj = child()
The only way to both have the myList attribute, and not need to specify it at creation would be to assign a default value to it:
class child:
    def find_mean(self, myList=None):
        mean = np.mean(myList)
        return mean
# now this will work
obj = child()
# as will this
obj = child([24, 35, 27])

How are instance attributes passed to the decorator's inner function?

I recently studied how decorators work in Python, and found an example which integrates decorators with nested functions.
The code is here:
def integer_check(method):
    def inner(ref):
        if not isinstance(ref._val1, int) or not isinstance(ref._val2, int):
            raise TypeError('val1 and val2 must be integers')
        else:
            return method(ref)
    return inner

class NumericalOps(object):
    def __init__(self, val1, val2):
        self._val1 = val1
        self._val2 = val2

    @integer_check
    def multiply_together(self):
        return self._val1 * self._val2

    def power(self, exponent):
        return self.multiply_together() ** exponent
y = NumericalOps(1, 2)
print(y.multiply_together())
print(y.power(3))
My question is how the inner function's argument ("ref") accesses the instance attributes (ref._val1 and ref._val2).
It seems like ref equals the instance, but I have no idea how that happens.
Let's first recall how a decorator works:
Decorating the method multiply_together with the decorator @integer_check is equivalent to adding the line multiply_together = integer_check(multiply_together), and by the definition of integer_check, this is equivalent to multiply_together = inner.
Now, when you call the method multiply_together, since this is an instance method, Python implicitly adds the class instance used to invoke the method as its first (and only, in this case) argument. But multiply_together is, actually, inner, so, in fact, inner is invoked with the class instance as an argument. This instance is mapped to the parameter ref, and through this parameter the function gets access to the required instance attributes.
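To see this concretely, here is a small sketch (reusing the question's classes) showing that the decorated name now refers to inner and that the instance becomes ref:
print(NumericalOps.multiply_together.__name__)   # 'inner' -- the decorator replaced the method
y = NumericalOps(1, 2)
print(y.multiply_together())                      # 2
print(NumericalOps.multiply_together(y))          # 2 -- equivalent call; y is passed in as ref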
Well, one explanation I found some time ago about the self argument was that this:
y.multiply_together()
is roughly the same as
NumericalOps.multiply_together(y)
Now that you use the decorator, it returns the function inner, which requires the ref argument, so roughly this happens (at a lower level):
NumericalOps.inner(y)
because inner "substitutes" multiply_together while also adding the extra functionality.
inner replaces the original function as the value of the class attribute.
@integer_check
def multiply_together(self):
    return self._val1 * self._val2

# def multiply_together(self):
#     ...
#
# multiply_together = integer_check(multiply_together)
first defines a function and binds it to the name multiply_together. That function is then passed as the argument to integer_check, and then the return value of integer_check is bound to the name multiply_together. The original function is now only referenced by the name method that is local to integer_check (and captured by inner).
The definition of inner implies that integer_check can only be applied to functions whose first argument will have attributes named _val1 and _val2.

Python setattr() to function takes initial function name

I do understand how setattr() works in Python, but my question is: when I try to dynamically set an attribute and give it an unbound function as a value (so the attribute is a callable), the attribute ends up reporting the name of the unbound function when I call attr.__name__, instead of the name of the attribute.
Here's an example:
I have a Filter class:
class Filter:
    def __init__(self, column=['poi_id', 'tp.event'], access=['con', 'don']):
        self.column = column
        self.access = access
        self.accessor_column = dict(zip(self.access, self.column))
        self.set_conditions()

    def condition(self, name):
        # i want to be able to get the name of the dynamically set
        # function and check `self.accessor_column` for a value, but when
        # i do `setattr(self, 'accessor', self.condition)`, the function
        # name is always set to `condition` rather than `accessor`
        return name

    def set_conditions(self):
        mapping = list(zip(self.column, self.access))
        for i in mapping:
            poi_column = i[0]
            accessor = i[1]
            setattr(self, accessor, self.condition)
In the class above, the set_conditions method dynamically sets attributes (con and don) on the Filter instance and assigns them a callable, but they retain the original name of the function.
When I run this:
>>> f = Filter()
>>> print(f.con('linux'))
>>> print(f.con.__name__)
Expected:
linux
con (which should be the name of the dynamically set attribute)
I get:
linux
condition (name of the value (unbound self.condition) of the attribute)
But I expect f.con.__name__ to return the name of the attribute (con), not the name of the unbound function (condition) assigned to it.
Can someone please explain why this behaviour occurs and how I can get around it?
Thanks.
function.__name__ is the name under which the function was initially defined; it has nothing to do with the name under which it is accessed. Actually, the whole point of function.__name__ is to correctly identify the function whatever name is used to access it. You definitely want to read this for more on what Python's "names" are.
One of the possible solutions here is to replace the static definition of condition with a closure:
class Filter(object):
    def __init__(self, column=['poi_id', 'tp.event'], access=['con', 'don']):
        self.column = column
        self.access = access
        self.accessor_column = dict(zip(self.access, self.column))
        self.set_conditions()

    def set_conditions(self):
        mapping = list(zip(self.column, self.access))
        for column_name, accessor_name in mapping:
            # default arguments freeze the current loop values for each closure
            def accessor(name, accessor_name=accessor_name, column_name=column_name):
                print("in {}.accessor '{}' for column '{}'".format(self, accessor_name, column_name))
                return name
            # this is now technically useless but helps with inspection
            accessor.__name__ = accessor_name
            setattr(self, accessor_name, accessor)
As a side note (totally unrelated, but I thought you may want to know this), using mutable objects as function argument defaults is one of the most infamous Python gotchas and may yield totally unexpected results, i.e.:
>>> f1 = Filter()
>>> f2 = Filter()
>>> f1.column
['poi_id', 'tp.event']
>>> f2.column
['poi_id', 'tp.event']
>>> f2.column.append("WTF")
>>> f1.column
['poi_id', 'tp.event', 'WTF']
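If you want to sidestep that gotcha, one common idiom (a sketch of the __init__ only; the rest of the class stays as above) is to use None as the default and build fresh lists per instance:
class Filter:
    def __init__(self, column=None, access=None):
        # each instance gets its own lists instead of sharing one default list object
        self.column = column if column is not None else ['poi_id', 'tp.event']
        self.access = access if access is not None else ['con', 'don']
        self.accessor_column = dict(zip(self.access, self.column))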
EDIT:
thank you for your answer, but it doesn't touch my issue here. My problem is not how functions are named or defined; my problem is that when I use setattr() and set an attribute and give it a function as its value, I can access the value and perform what the value does, but since it's a function, why doesn't it return its name as the function name?
Because, as I already explained above, the function's __name__ attribute and the name of the Filter instance attribute(s) referring to this function are totally unrelated, and the function knows absolutely nothing about the names of variables or attributes that reference it, as explained in the reference article I linked to.
Actually, the fact that the object you're passing to setattr is a function is totally irrelevant; from the object's POV it's just a name and an object, period. And the fact that you're binding this object (a function or any other object) to an instance attribute (whether directly or using setattr(), it works just the same) instead of a plain variable is also totally irrelevant - none of those operations will have any impact on the object that is bound (except for increasing its ref counter, but that's a CPython implementation detail - other implementations may implement garbage collection differently).
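A tiny standalone demonstration of that point:
def greet():
    pass

alias = greet            # bind the same function object to a second name
print(alias.__name__)    # prints 'greet' -- __name__ comes from the def statement, not from the name used to reach it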
May I suggest this:
from types import SimpleNamespace

class Filter:
    def __init__(self, column=['poi_id', 'tp.event'], access=['con', 'don']):
        self.column = column
        self.access = access
        self.accessor_column = dict(zip(self.access, self.column))
        self.set_conditions()

    def set_conditions(self):
        for i in self.access:
            setattr(self, i, SimpleNamespace(name=i, func=lambda name: name))
>>> f = Filter()
>>> print(f.con.func('linux'))
linux
>>> print(f.con.name)
con
[edited after bruno desthuilliers's comment.]

How to store function in class attribute?

In my code I have a class where one method is responsible for filtering some data. To allow customization in descendants, I would like to define the filtering function as a class attribute, as per below:
def my_filter_func(x):
    return x % 2 == 0

class FilterClass(object):
    filter_func = my_filter_func

    def filter_data(self, data):
        return filter(self.filter_func, data)

class FilterClassDescendant(FilterClass):
    filter_func = my_filter_func2  # some other module-level filter function
However, such code leads to a TypeError, as filter_func receives "self" as its first argument.
What is a pythonic way to handle such use cases? Perhaps I should define my "filter_func" as a regular class method?
You could just add it as a plain old attribute?
def my_filter_func(x):
    return x % 2 == 0

class FilterClass(object):
    def __init__(self):
        self.filter_func = my_filter_func

    def filter_data(self, data):
        return filter(self.filter_func, data)
Alternatively, force it to be a staticmethod:
def my_filter_func(x):
    return x % 2 == 0

class FilterClass(object):
    filter_func = staticmethod(my_filter_func)

    def filter_data(self, data):
        return filter(self.filter_func, data)
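A descendant can then swap in its own predicate the same way; for example (a sketch - my_filter_func2 is just a hypothetical replacement predicate):
def my_filter_func2(x):
    return x % 3 == 0

class FilterClassDescendant(FilterClass):
    # staticmethod again prevents the implicit self from being passed
    filter_func = staticmethod(my_filter_func2)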
Python has a lot of magic within. One of those magics has to do with transforming functions into UnboundMethod objects (when assigned to the class, and not to a class instance).
When you assign a function (and I'm not sure whether it applies to any callable or just functions), Python converts it to an UnboundMethod object, i.e. an object which can be called using an instance or not. (In Python 3 there are no unbound methods; a function looked up on the class is just a plain function.)
Under normal conditions, you can call your UnboundMethod as normal:
def myfunction(a, b):
    return a + b

class A(object):
    a = myfunction

A.a(1, 2)
# prints 3
This will not fail. However, there's a distinct case when you try to call it from an instance:
A().a(1, 2)
This will fail since, when an instance gets (say, via internal getattr) an attribute which is an UnboundMethod, it returns a copy of such method with the im_self member populated (im_self and im_func are members of UnboundMethod). The function you intended to call is in the im_func member. When you call this method, you're actually calling im_func with, additionally, the value in im_self. So the function needs an additional parameter (the first one, which will stand for self).
To avoid this magic, Python has two possible decorators:
If you want to pass the function as-is, you must use @staticmethod. In this case, the function will not be converted to an UnboundMethod. However, you will not be able to access the calling class, except as a global reference.
If you want the same, but be able to access the current class (regardless of whether the function is called from an instance or from a class), then your function should have a different first argument (cls instead of self) which is a reference to the class, and the decorator to use is @classmethod.
Examples:
class A(object):
    a = staticmethod(lambda a, b: a + b)

A.a(1, 2)
A().a(1, 2)
Both will work.
Another example:
def add_print(cls, a, b):
    print(cls.__name__)
    return a + b

class A(object):
    ap = classmethod(add_print)

class B(A):
    pass

A.ap(1, 2)
B.ap(1, 2)
A().ap(1, 2)
B().ap(1, 2)
Check this by yourself and enjoy the magic.

Persistent objects in recursive python functions

I am trying to write a recursive function that needs to store and modify an object (say a set) as it recurses. Should I use a global name inside the function? Another option is to modify or inherit the class of the function's parameter so that it can keep this persistent object, but I don't find that elegant. I could also use a stack if I forgo the recursion altogether...
Is there a pythonic way of doing this? Could a generator do the trick?
Just pass your persistent object through the recursive method.
def recursivemethod(obj_to_act_on, persistent_obj=None):
    if persistent_obj is None:
        persistent_obj = set()
    # act on your object here; newobj is a placeholder for whatever the next call operates on
    return recursivemethod(newobj, persistent_obj)
Objects are passed by reference. If you're only modifying an object, you can do that from within a recursive function and the change will be globally visible.
If you need to assign a variable inside a recursive function and see it after the function returns, then you can't just assign a local variable with =. What you can do is update a field of another object.
class Accumulator: pass

def foo():
    # Create accumulator
    acc = Accumulator()
    acc.value = 0
    # Define and call a recursive function that modifies the accumulator
    def bar(n):
        if (n > 0): bar(n - 1)
        acc.value = acc.value + 1
    bar(5)
    # Get accumulator
    return acc.value
Pass the set into the recursive method as an argument, then modify it there before passing it to the next step. Complex objects are passed by reference.
If it's a container (not an immutable data type), you can pass the object through:
import random

def foo(bar=None, i=10):
    if bar is None:
        bar = set()
    if i == 0:
        return bar
    bar |= set(random.randint(1, 1000) for i in range(10))
    return foo(bar, i - 1)
random_numbers_set = foo()
(Don't ask me what that's meant to do... I was just typing random things :P)
If the object you pass is mutable then changes to it in deeper recursions will be seen in earlier recursions.
Use a variable global to the function.
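For example, a minimal sketch of that approach (a module-level set shared by every recursive call; the names here are just illustrative):
seen = set()   # lives at module level, global to the function

def collect(n):
    # record every value visited during the recursion in the shared set
    seen.add(n)
    if n > 0:
        collect(n - 1)

collect(3)
print(seen)   # {0, 1, 2, 3}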
Pass the object around as an accumulator:
def recurse(foo, acc=None):
    if acc is None:
        acc = {}
    # ... do work on foo, storing results in acc ...
    recurse(foo, acc)
