How to reach an unnamed object in Python - python

# first way
class temp:
    def __init__(self, name):
        self.name = name

object1 = temp("abolfazl")
print(object1)

# second way
class temp:
    def __init__(self, name):
        self.name = name

print(temp("abolfazl"))
Both do the same thing (I guess :)): they create an instance of the temp class. But if we do it the second way, we can't retrieve that object afterwards, or so I guess.
Could you please tell me what the differences are? And what does "self" do? I thought it had something to do with "object1" in the first way, but now I'm confused.

Objects have a reference count; when the reference count reaches 0, the object is destroyed.
In your first example, object1 = temp("...") creates the object and sets its reference count to 1: object1 is that reference. When you pass it to print, the count temporarily rises to two, because the parameter that print binds the object to is a second reference. Once print returns, the parameter goes out of scope and the reference count is decremented back to 1.
In your second example, the only reference to the temp instance is the parameter in print. When print returns, the reference count drops from 1 to 0, and the object is destroyed.
self is just the parameter bound to the new instance for the duration of the call to temp.__init__.
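As a rough, illustrative way to see these counts in action, sys.getrefcount can be used (it always reports one extra reference, for its own argument); this sketch is not part of the original question:

import sys

class temp:
    def __init__(self, name):
        self.name = name

object1 = temp("abolfazl")
# One reference held by object1, plus the temporary reference
# created by passing the object to getrefcount itself:
print(sys.getrefcount(object1))  # typically 2

# In the second style no name is bound to the instance, so once
# print() returns, the object is unreachable and destroyed:
print(temp("abolfazl"))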

Related

Pass a class attribute to a function outside of the class

I've created a class called "Test" with attributes "hp" and "max_hp". Now I want to pass these attributes as arguments to an outside function (it is outside the class since I want to use it with other instances of other classes), and add 1 to the attribute if it is not equal to the max attribute. My problem is that even though the function is called and test.hp and test.max_hp passed as arguments, they don't change.
Here's the code:
class Test:
    def __init__(self):
        self.hp = 10
        self.max_hp = self.hp

def recovery_action(attribute, max_attribute):
    if attribute == max_attribute:
        pass
    else:
        attribute += 1

test = Test()
test.hp -= 5
print(f'test.hp - {test.hp}')
recovery_action(test.hp, test.max_hp)
print(f'test.hp after recovery - {test.hp}')
The problem is that the output looks like this:
test.hp - 5
test.hp after recovery - 5
I passed the test.hp as an argument and it wasn't equal to test.max_hp, so 1 should have been added to it - but it stayed the same. What am I doing wrong here?
The problem is you are not really changing the attribute in your function.
Basically, what you do is call recovery_action(5, 10).
If you want to actually change the attribute, you can return the result and assign it, like this:
def recovery_action(attribute, max_attribute):
    if attribute != max_attribute:
        return attribute + 1
    return attribute
Then you can run:
test.hp = recovery_action(test.hp, test.max_hp)
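For example, with the Test class from the question, the return-and-reassign version behaves like this (a quick illustrative check, not from the original post):

test = Test()
test.hp -= 5
test.hp = recovery_action(test.hp, test.max_hp)
print(test.hp)  # 6: the returned value was assigned back to the attribute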
The variable attribute lives in the scope of recovery_action. What you actually want is to reference the instance itself, so that changes are saved outside the scope of the function. A solution could be:
def recovery_action(test: Test):
    if test.hp == test.max_hp:
        pass
    else:
        test.hp += 1
This version passes an instance of Test to the function (hence the : Test annotation). The rest of the function is straightforward. Now you modify the attribute on the object you pass in, so the change persists after the function returns.
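A short usage sketch of this instance-based version (illustrative, not part of the original answer):

test = Test()
test.hp -= 5
recovery_action(test)
print(test.hp)  # 6: the attribute was modified on the instance itself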
There is no object representing the attribute itself; test.max_hp is just an expression that evaluates to the value of the attribute. Since an int is immutable, you can't change the value by passing it as an argument. You would need to pass the object and the name of the attribute as two separate arguments, and have the function operate on the object directly.
def recovery_action(obj, attribute, max_):
    if getattr(obj, attribute) == max_:
        pass
    else:
        setattr(obj, attribute, getattr(obj, attribute) + 1)

recovery_action(test, 'hp', test.max_hp)
Note that your function doesn't care that the maximum value comes from an object attribute; it only cares that it is some integer.
Changing a function's parameter will not change the variable you passed in to that function. You can return the changed value and reassign the class attribute.
Also, creating getters and setters for the attributes may be a good idea for accessing and changing class attributes.
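As a hedged sketch of that getter/setter idea, a property could cap the value automatically; the _hp name and the clamping policy here are illustrative assumptions, not taken from the question:

class Test:
    def __init__(self):
        self._hp = 10
        self.max_hp = self._hp

    @property
    def hp(self):
        return self._hp

    @hp.setter
    def hp(self, value):
        # Never let hp exceed max_hp (an assumed policy, for illustration)
        self._hp = min(value, self.max_hp)

test = Test()
test.hp += 20
print(test.hp)  # 10: the setter capped the value at max_hp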

TypeError: __init__() missing 1 required positional argument: 'lists'

I created a class, something like below -
import numpy as np

class child:
    def __init__(self, lists):
        self.myList = lists

    def find_mean(self):
        mean = np.mean(self.myList)
        return mean
and when I create an object something like below -
obj=child()
it gives the error -
TypeError: __init__() missing 1 required positional argument: 'lists'
if I create object like below then it works well -
obj = child([44, 22, 55])
or If I create the class like below -
class child:
    def find_mean(self, myList):
        mean = np.mean(myList)
        return mean
and then I create the object like below -
obj=child()
then it also works well. However, I need to make it work the way I explained at the very beginning. Can you please help me understand this?
In the first example, the __init__ method expects two parameters:
self is automatically filled in by Python.
lists is a parameter which you must give it. It will try to assign this value to a new variable called self.myList, and it won't know what value it is supposed to use if you don't give it one.
In the second example, you have not written an __init__ method. This means that Python creates its own default __init__ function which will not require any parameters. However, the find_mean method now requires you to give it a parameter instead.
When you say you want to create it in the way you explained at the beginning, this is actually impossible: the class requires a value, and you are not giving it one.
Therefore, it is hard for me to tell what you really want to do. However, one option might be that you want to create the class earlier, and then add a list to it later on. In this case, the code would look like this:
import numpy as np

class Child:
    def __init__(self, lists=None):
        self.myList = lists

    def find_mean(self):
        if self.myList is None:
            return np.nan
        mean = np.mean(self.myList)
        return mean
This code allows you to create the object earlier, and add a list to it later. If you try to call find_mean without giving it a list, it will simply return nan:
child = Child()
print(child.find_mean()) # Returns `nan`
child.myList = [1, 2, 3]
print(child.find_mean()) # Returns `2`
The code you have at the top of your question defines a class called child with one attribute, myList, which is assigned from the lists parameter in the __init__ method at instance creation. This means that you must supply a list when creating an instance of child.
class child:
    def __init__(self, lists):
        self.myList = lists

    def find_mean(self):
        mean = np.mean(self.myList)
        return mean

# works because a list is provided
obj = child([44, 22, 55])

# does not work because no list is given
obj = child()  # TypeError
If you create the class like in your second example, __init__ is no longer being explicitly specified, and as such, the object has no attributes that must be assigned at instance creation:
class child:
    def find_mean(self, myList):
        mean = np.mean(myList)
        return mean

# does not work because `child()` does not take any arguments
obj = child([44, 22, 55])  # TypeError

# works because no list is needed
obj = child()
The only way to both have the myList attribute and not need to specify it at creation is to give it a default value:
class child:
    def __init__(self, lists=None):
        self.myList = lists

    def find_mean(self):
        mean = np.mean(self.myList)
        return mean
# now this will work
obj = child()
# as will this
obj = child([24, 35, 27])

Define a constant object in python

I defined a class Factor in the file factor.py:
class Factor:
    def __init__(self, var, value):
        self.var = var      # holds variable names
        self.value = value  # holds probability values
For convenience and code cleanliness, I want to define a constant variable and be able to access it as Factor.empty
empty = Factor([], None)
What is the common way to do this? Should I put it in the class definition, or outside? I'm thinking of putting it outside the class definition, but then I wouldn't be able to refer to it as Factor.empty.
If you want it outside the class definition, just do this:
class Factor:
    ...
Factor.empty = Factor([], None)
But bear in mind, this isn't a "constant". You could easily do something to change the value of empty or its attributes. For example:
Factor.empty = something_else
Or:
Factor.empty.var.append("a value")
So if you pass Factor.empty to any code that manipulates it, you might find it less empty than you wanted.
One solution to that problem is to re-create a new empty Factor each time someone accesses Factor.empty:
class FactorType(type):
    @property
    def empty(cls):
        return Factor([], None)

class Factor(object):
    __metaclass__ = FactorType
    ...
This adds an empty property to the Factor class. You are safe to do what you want with the result, as every time you access empty, a new empty Factor is created. (Note that __metaclass__ is Python 2 syntax; on Python 3 you would declare the metaclass as class Factor(metaclass=FactorType): instead.)
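A minimal, illustrative Python 3 sketch of the same idea and the behaviour it gives you:

class FactorType(type):
    @property
    def empty(cls):
        # A brand-new empty Factor on every attribute access
        return cls([], None)

class Factor(metaclass=FactorType):
    def __init__(self, var, value):
        self.var = var
        self.value = value

a = Factor.empty
b = Factor.empty
print(a is b)            # False: each access builds a fresh instance
a.var.append("a value")
print(Factor.empty.var)  # []: mutating one copy does not affect the next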

Class assigning index to object using static class variable

I have a class which represents an object to be kept in a set. I would like the class itself to keep track of how many instances it has created, so that each time SetObject() is called and __init__() constructs a new object, that object receives a unique index. Maybe something like this:
class SetObject(object):
    # static class variable
    object_counter = 0

    def __init__(self, params):
        self.params = params
        self.index = self.get_index()

    def get_index(self):
        object_counter += 1
        return object_counter - 1

a = SetObject(paramsa)
b = SetObject(paramsb)
print a.index
print b.index
would produce
0
1
or something like this. Currently it seems that this approach gives a "variable referenced before assignment" error.
You need to write:
def get_index(self):
    SetObject.object_counter += 1
    return SetObject.object_counter - 1
otherwise it would only work if object_counter were a global variable.
You need to use a reference to the class to refer to its variables; you could perhaps use a class method (with the @classmethod decorator), but there is really no need to.
Better to use itertools.count() to get a fool-proof 'static' counter; there is then no need to reassign back to the class attribute:
import itertools

class SetObject(object):
    object_counter = itertools.count().next

    def __init__(self, params):
        self.params = params
        self.index = self.object_counter()
(The code above assumes Python 2; on Python 3 iterators do not have a .next method and you'd need to use functools.partial(next, itertools.count()) instead.)
Because the counter is an iterator, we don't need to assign to SetObject.object_counter at all. Subclasses can provide their own counter as needed, or re-use the parent class counter.
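On Python 3, the same idea can be sketched with functools.partial, as the note above suggests (illustrative parameter values):

import itertools
from functools import partial

class SetObject(object):
    # partial objects are not descriptors, so this attribute is not
    # turned into a bound method when accessed through self
    object_counter = partial(next, itertools.count())

    def __init__(self, params):
        self.params = params
        self.index = self.object_counter()

a = SetObject("paramsa")
b = SetObject("paramsb")
print(a.index, b.index)  # 0 1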
The line
object_counter += 1
translates to
object_counter = object_counter + 1
When you assign to a variable inside a scope (e.g. inside a function), Python assumes you wanted to create a new local variable. So it marks object_counter as being local, which means that when you try to get its value (to add one) you get a "not defined" error.
To fix it, tell Python where to look up object_counter. In general you can use the global or nonlocal keywords for this, but in your case you just want to look it up on the class:
self.__class__.object_counter += 1
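Putting that together, a minimal corrected version of the original class might look like this (a sketch with illustrative params values):

class SetObject(object):
    object_counter = 0

    def __init__(self, params):
        self.params = params
        self.index = self.get_index()

    def get_index(self):
        # Look the counter up on the class explicitly, so Python does not
        # treat object_counter as a new local variable of this method
        self.__class__.object_counter += 1
        return self.__class__.object_counter - 1

a = SetObject("paramsa")
b = SetObject("paramsb")
print(a.index)  # 0
print(b.index)  # 1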

On second initialization of an object, why is __init__ called before __del__?

Consider the following example code
class A:
    def __init__(self, i):
        self.i = i
        print("Initializing object {}".format(self.i))

    def __del__(self):
        print("Deleting object {}".format(self.i))

for i in [1, 2]:
    a = A(i)
Creating the object within the loop was intended to assure that the destructor of A would be called before the new A object would be created. But apparently the following happens:
Initializing object 1
Initializing object 2
Deleting object 1
Deleting object 2
Why is the destructor of object 1 only called after the new object has been initialized? Is this intended behaviour? I know that the for loop has no scope of its own in Python. In C++, for example, the destructor of object 1 would certainly be called before the constructor of object 2 (at least if the object is declared within the loop).
In my program I want to ensure that the old object is deleted before the new one is created. Is there another possibility apart from deleting a explicitly at the end of the for loop?
Thanks in advance.
Creation of the second object happens before the name is rebound and the first object is disposed of.
The first A is instantiated.
a is bound.
The second A is instantiated.
a is rebound, and the first A is disposed of.
The program ends, and the second A is disposed of.
You can't rely on the garbage collector's implementation details when planning lifetime dependencies. You need to do this explicitly one way or another.
Context managers spring to mind, for example:
from contextlib import contextmanager

@contextmanager
def deleting(obj):
    try:
        yield
    finally:
        del obj

class A:
    def __init__(self, i):
        self.i = i
        print("Initializing object {}".format(self.i))

    def __del__(self):
        print("Deleting object {}".format(self.i))

for i in [1, 2]:
    with deleting(A(i)) as obj:
        pass

print()

for i in [1, 2]:
    a = A(i)
This produces the following output:
Initializing object 1
Deleting object 1
Initializing object 2
Deleting object 2
Initializing object 1
Initializing object 2
Deleting object 1
Deleting object 2
Assuming that you want the object to be defined as its final value when the loop exits, don't delete the object explicitly at the end of the for loop, do so at the beginning of the loop, like this:
class A:
    def __init__(self, i):
        self.i = i
        print("Initializing object {}".format(self.i))

    def __del__(self):
        print("Deleting object {}".format(self.i))

for i in [1, 2, 3]:
    a = None
    a = A(i)
That prints:
Initializing object 1
Deleting object 1
Initializing object 2
Deleting object 2
Initializing object 3
Deleting object 3
(Note: Ignatio is right about why it works the way it works, but KennyTM is right, too, that to make it more obvious what is happening you should make it go at least three times through the loop.)
