Here's a snippet of code.
class TestClass:
    def __init__(self):
        self.a = "a"
        print("calling init")

    @property
    def b(self):
        b = "b"
        print("in property")
        return b

test_obj = TestClass()
print("a = {} b = {}".format(test_obj.a, test_obj.b))
I'm trying to understand when the variable b defined inside test_obj gets its value of "b".
As you can see from the screenshot below, the final print statement has not yet been evaluated/executed, but the value of b for test_obj has already been initialized. Debugging this by placing a breakpoint on literally every single line didn't help me understand how this is happening.
Can someone please explain this to me?
Most likely, the IDE is trying to show you what the value of test_obj.b is. To do that, it has to evaluate test_obj.b. Since it doesn't make much difference to the debugger whether b is a plain attribute or a @property, it essentially just evaluates test_obj.b for you, which yields the value 'b'.
The function def b works exactly as you would expect of any ordinary function; it's just that the debugger/IDE implicitly invokes it for you.
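You can observe the same behaviour without a debugger: every access of the property runs the getter again. A quick sketch using the class above:

test_obj = TestClass()  # prints "calling init"
test_obj.b              # prints "in property" and evaluates to 'b'
test_obj.b              # prints "in property" again: the getter runs on every access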
Related
I have a class A with some member functions that all do the same thing.
class A:
    def a(self):
        ...  # boilerplate code
    b = c = d = a
For debugging reasons, I would like to know the name of each member function at runtime. But since they are all the same function object, they share the same __name__ attribute, and I cannot figure out a way to distinguish between A.a and A.b just by looking at the object.
a = A.a
b = A.b
a.__name__ == b.__name__ # this is true
# how do I tell the difference between a and b?
Is there a way to achieve this without manually creating the functions b, c and d with the same boilerplate code?
No. Objects and names in Python live in separate spaces. There's only one function object there, and the function object doesn't know through what name it was conjured.
If you were a masochist, I suppose it would be possible to get a traceback and look at the line of code that called you, but that's just not practical.
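You can verify the "only one function object" claim directly (a sketch assuming Python 3, where A.a is the plain function; in Python 2 you would compare A.a.__func__ instead):

class A:
    def a(self):
        pass
    b = a

A.a is A.b  # True: two names, one function object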
You could do something like:
class A:
    def reala(self, me=None):
        pass  # boilerplate code goes here

    def a(self):
        return self.reala('a')

    def b(self):
        return self.reala('b')

    # ... and similarly for c and d
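If the goal is just a distinct __name__ per alias, another option is to generate each method from a factory and stamp a name onto each copy. This is a sketch, not the only way to do it; make_method is a hypothetical helper:

def make_method(name):
    def method(self):
        pass  # shared boilerplate goes here
    method.__name__ = name  # give this copy its own name
    return method

class A:
    a = make_method('a')
    b = make_method('b')
    c = make_method('c')
    d = make_method('d')

A().a.__name__  # 'a'
A().b.__name__  # 'b'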
In an attempt to format my code a little better and avoid redundancy across multiple methods that do the same thing for different classes, I'm faced with the following problem:
# problematic method
def a(self, b, c):
    result = test(b)
    if result:
        c = None  # <- local variable c is assigned but never used
    return

# main code
obj.a(obj.b, obj.c)
And obj's attribute c is never set to None.
The current working code that I'm trying to reformat is the following:
# working method
def a(self):
    result = test(self.b)
    if result:
        self.c = None
    return

# main code
obj.a()
See Why can a function modify some arguments as perceived by the caller, but not others? for an explanation of why reassigning c inside a doesn't update obj.c.
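The short version: assignment inside a function rebinds a local name and never reaches back to the caller, whereas mutating a shared object is visible to the caller. A minimal sketch:

def rebind(x):
    x = None        # rebinds the local name x only

def mutate(x):
    x.append(None)  # mutates the object both names refer to

lst = []
rebind(lst)         # lst is still []
mutate(lst)         # lst is now [None]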
If you want to pass a reference to an object attribute to a function, you have to pass it two things:
the object
the name of the attribute (i.e. a string)
You can then dynamically set that attribute from inside the function:
def a(self, b, name_of_c):
    result = test(b)
    if result:
        setattr(self, name_of_c, None)

obj.a(obj.b, 'c')
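For completeness, here is a self-contained version you can run; Obj and its test logic are hypothetical stand-ins for the poster's code:

class Obj(object):
    def __init__(self):
        self.b = 1
        self.c = 2

    def a(self, b, name_of_c):
        if b == 1:  # stand-in for test(b)
            setattr(self, name_of_c, None)

obj = Obj()
obj.a(obj.b, 'c')
print(obj.c)  # None: the attribute really was updated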
At the end of your def a routine, assign c to itself. It makes no difference when the code is executed, since the assignment comes after the return, but it gets rid of the annoying warning, and it's simple to understand when reviewing the code in the future:
def a(self, b, c):
    result = test(b)
    if result:
        c = None  # <- local variable c is assigned but never used
    return
    # This will never run, since it is after the return,
    # but it will get rid of the annoying "assigned but
    # never used" warning.
    c = c
I want to check that my function has no side effects, or side effects on only certain specified variables. Is there an existing function or tool that checks this?
If not, how can I go about writing my own, along the following lines?
My idea is something like this: initialise the checker, call the function under test, and then call the final method:
class test_side_effects(object):
    def __init__(self, parents_scope, exclude_variables=[]):
        self.exclude_variables = exclude_variables
        for variable_name, variable_initial in parents_scope.items():
            if variable_name not in exclude_variables:
                setattr(self, "test_" + variable_name, variable_initial)

    def final(self, final_parents_scope):
        for variable_name, variable_final in final_parents_scope.items():
            if variable_name not in self.exclude_variables:
                variable_initial = getattr(self, "test_" + variable_name)
                assert variable_initial is variable_final, \
                    "Unexpected side effect of %s from %s to %s" % (
                        variable_name, variable_initial, variable_final)

# here parents_scope should be passed in as dict(globals(), **locals())
I'm unsure if this is precisely the dictionary I want...
Finally, should I be doing this? If not, why not?
I'm not familiar with the testing library you might be writing this test with, but it seems like you should really be using classes here (i.e. a TestCase in many frameworks).
If your question, then, is about getting the parent variables into your TestCase, you could use the instance's __dict__ (it wasn't clear to me which "parent" variables you were referring to).
UPDATE: @hayden posted a gist to show the use of parent variables:
def f():
    a = 2
    b = 1
    def g():
        #a = 3
        b = 2
        c = 1
        print dict(globals(), **locals())  # prints a=1, but we want a=2 (from f)
    g()

a = 1
f()
If the nested function is converted to a class, then the problem is solvable:
class f(object):  # could be a unittest TestCase
    def setUp(self, a=2, b=1):
        self.a = a
        self.b = b

    def g(self):
        #a = 3
        b = 2
        c = 1
        full_scope = globals().copy()
        full_scope.update(self.__dict__)
        full_scope.update(locals())
        full_scope.pop('full_scope')
        print full_scope  # prints a = 1

my_test = f()
my_test.setUp(a=1)
my_test.g()
You are right to look for a tool that has already implemented this; I am hopeful that somebody else will know of an existing solution.
This works in the desired way:
class d:
    def __init__(self, arg):
        self.a = arg
    def p(self):
        print "a= ", self.a

x = d(1)
y = d(2)
x.p()
y.p()
yielding
a= 1
a= 2
I've tried eliminating the "self"s and using a global statement in __init__:
class d:
    def __init__(self, arg):
        global a
        a = arg
    def p(self):
        print "a= ", a

x = d(1)
y = d(2)
x.p()
y.p()
yielding, undesirably:
a= 2
a= 2
Is there a way to write it without having to use "self"?
"self" is the way how Python works. So the answer is: No! If you want to cut hair: You don't have to use "self". Any other name will do also. ;-)
Python methods are just functions that are bound to a class or to an instance of a class. The only difference is that a method (a.k.a. a bound function) expects the instance object as its first argument, and when you invoke a method on an instance, the instance is passed in as that first argument automatically. So by defining self in a method, you're telling it the namespace to work with.
This way when you specify self.a the method knows you're modifying the instance variable a that is part of the instance namespace.
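To make the implicit argument visible, here is a small illustration using the d class from the question; the instance call and the explicit call are equivalent:

x = d(1)
x.p()   # instance call: x is passed in as self automatically
d.p(x)  # equivalent explicit call: you pass the instance yourself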
Python scoping works from the inside out, so each function (or method) has its own namespace. If you create a variable a locally within the method p (these names suck, BTW), it is distinct from self.a. Example using your code:
class d:
    def __init__(self, arg):
        self.a = arg
    def p(self):
        a = self.a - 99
        print "my a= ", a
        print "instance a= ", self.a

x = d(1)
y = d(2)
x.p()
y.p()
Which yields:
my a= -98
instance a= 1
my a= -97
instance a= 2
Lastly, you don't have to call the first variable self. You could call it whatever you want, although you really shouldn't. It's convention to define and reference self from within methods, so if you care at all about other people reading your code without wanting to kill you, stick to the convention!
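Just to demonstrate that point (not a recommendation), the first parameter's name is arbitrary:

class d:
    def __init__(me, arg):  # works, but unconventional
        me.a = arg
    def p(this):            # also works; please don't do this
        print "a= ", this.a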
Further reading:
Python Classes tutorial
When you remove the selfs, you end up with only one variable called a, shared not only amongst all your d objects but across your entire execution environment.
You can't eliminate self for this reason.
Have a look at this simple example. I don't quite understand why o1 prints "Hello Alex" twice. I would have thought that, because of the default, self.a is always reset to an empty list. Could someone explain the rationale here? Thank you so much.
class A(object):
    def __init__(self, a=[]):
        self.a = a

o = A()
o.a.append('Hello')
o.a.append('Alex')
print ' '.join(o.a)
# >> prints Hello Alex

o1 = A()
o1.a.append('Hello')
o1.a.append('Alex')
print ' '.join(o1.a)
# >> prints Hello Alex Hello Alex
Read this Pitfall about mutable default function arguments:
http://www.ferg.org/projects/python_gotchas.html
In short, when you define

def __init__(self, a=[]):

the default list referenced by self.a is created only once, at definition-time, not at run-time. So each time you call o.a.append or o1.a.append, you are modifying the same list.
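You can confirm that both instances share one list; a quick check based on the class above:

o = A()
o1 = A()
o.a is o1.a              # True: both names refer to the single default list
A.__init__.__defaults__  # the shared list is stored on the function itself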
The typical way to fix this is to say:
class A(object):
    def __init__(self, a=None):
        self.a = [] if a is None else a
By moving self.a=[] into the body of the __init__ function, a new empty list is created at run-time (each time __init__ is called), not at definition-time.
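With this fix in place, each instance gets its own list, which you can verify:

o = A()
o1 = A()
o.a.append('Hello')
o.a is o1.a  # False: separate lists now
o1.a         # [] : o1 is unaffected by appends to o.a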
Default arguments in Python, like:

def blah(a="default value"):

are evaluated once, at definition time, and re-used on every call, so when you mutate a you are mutating the one shared default object. A possible solution is to do:
def blah(a=None):
    if a is None:
        a = []
You can read more about this issue at: http://www.ferg.org/projects/python_gotchas.html#contents_item_6
Basically, never use a mutable object, such as a list or a dictionary, as the default value for an argument.