Consider these two classes:
class Test(int):
    difference = property(lambda self: self.__sub__)

class Test2(int):
    difference = lambda self: self.__sub__
Is there any difference between these two classes? New: If so, what is the purpose of using the property to store a lambda function that returns another function?
Update: Changed the question to what I should have asked in the first place. Sorry. Even though I can now work out the solution from the answers, it would be unfair for me to post a self-answer in these circumstances (without leaving the question open for a few days at least).
Update 2: Sorry, I wasn't clear enough again. The question was about the particular construction, not properties in general.
For Test, you could use .difference - for Test2, you'd need to use .difference() instead.
As for why you might use it, a potential use would be to replace something that was previously directly stored as a property with a dynamic calculation instead.
For instance, suppose you used to store an attribute obj.a directly, but then you expanded your implementation so that it instead knows attributes obj.b and obj.c, which can be used to calculate a but can also be used to calculate other things. If you still wanted to provide backwards compatibility for code that used the previous object form, you could implement obj.a as a property() that calculates a based on b and c; it would behave for those older code fragments just as it previously did, with no other code modification needed.
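To make that concrete, here is a minimal sketch of the backwards-compatibility pattern (the class Rect and its attributes are invented for illustration):

```python
class Rect(object):
    def __init__(self, width, height):
        # The object used to store .area directly; now it stores the
        # inputs and derives the old attribute on demand.
        self.width = width
        self.height = height

    # Old code that read obj.area keeps working, unchanged.
    area = property(lambda self: self.width * self.height)

r = Rect(3, 4)
print(r.area)  # accessed like a plain attribute, no call needed
```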
Edit: Ah, I see. You are asking why anybody would write exactly the code above. It's not, in fact, a question about why to make a lambda or a property at all, nor a question about the differences between the two examples, and not even about why you would want to make a property out of a lambda.
Your question is "Why would anybody make a property of a lambda that just returns self.__sub__".
And the answer is: One wouldn't.
Let's assume somebody wants to do this:
>>> foo = MyInt(8)
>>> print foo.difference(7)
1
So he tries to accomplish it by this class:
class MyInt(int):
    def difference(self, i):
        return self - i
But that's two lines, and since he is a Ruby programmer and believes that good code has as few lines as possible, he changes it to:
class MyInt(int):
    difference = int.__sub__
To save one line of code. But apparently, things are still too easy. He learned in Ruby that a problem is not properly solved unless you use anonymous code blocks, so he will try to use Python's nearest equivalent, lambdas, for absolutely no reason:
class MyInt(int):
    difference = lambda self, i: self - i
All of these work. But things are still WAY too uncomplicated, so instead he decides to make things more complex by not doing the calculation, but returning the sub method instead:
class MyInt(int):
    difference = lambda self: self.__sub__
Ah, but that doesn't work, because he needs to call difference to get the sub-method:
>>> foo = MyInt(8)
>>> print foo.difference()(7)
1
So he makes it a property:
class MyInt(int):
    difference = property(lambda self: self.__sub__)
There. Now he has found the maximum complexity to solve a non-problem.
But normal people wouldn't do any of this; they would just do:
>>> foo = 8
>>> print foo - 7
1
People have given their opinions without analyzing it; this can be better solved by Python itself. Below is the code to check the difference:
import difflib

s1 = """
class Test(int):
    difference = property(lambda self: self.__sub__)
"""
s2 = """
class Test(int):
    difference = lambda self: self.__sub__
"""

d = difflib.Differ()
print "and the difference is..."
for c in d.compare(s1, s2):
    if c[0] in '+-': print c[1:],
and as expected it says
and the difference is...
p r o p e r t y ( )
Yes, in one case difference is a property. If you are asking what a property is, you can think of it as a method that gets called automatically when the attribute is accessed.
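A tiny sketch of that behaviour (the Celsius class is invented here for illustration):

```python
class Celsius(object):
    def __init__(self, degrees):
        self.degrees = degrees

    # The lambda runs automatically on attribute access - no () needed.
    fahrenheit = property(lambda self: self.degrees * 9.0 / 5 + 32)

t = Celsius(100)
print(t.fahrenheit)  # read like a plain attribute, no call parentheses
```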
Yes, in one case difference is a property.
The purpose of a property can be:
1.
To provide get/set hooks while accessing an attribute
e.g. if you have a class with an attribute a, and later on you want to do something extra when it is set, you can convert that attribute to a property without affecting the interface or how users use your class. So in the example below, classes A and B look exactly the same to a user, but internally in B you can do many things in getX/setX:
class A(object):
    def __init__(self):
        self.x = 0

a = A()
a.x = 1

class B(object):
    def __init__(self):
        self.x = 0
    def getX(self): return self._x
    def setX(self, x): self._x = x
    x = property(getX, setX)

b = B()
b.x = 1
2.
As implied in 1, a property is a better alternative to get/set calls: instead of getX/setX, the user writes the less verbose self.x and self.x = 1. Personally, though, I never make a property just for getting or setting an attribute; if the need arises, it can be done later on, as shown in #1.
As far as the difference is concerned: a property provides you with get/set/del hooks for an attribute, but in the example you have given, a method (lambda or proper function) can only be used for one of get, set or del, so you would need three such lambdas: differenceGet, differenceSet, differenceDel.
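For illustration, a sketch of a property using all three hooks (the Account class and its validation rule are invented here):

```python
class Account(object):
    def __init__(self):
        self._balance = 0

    def _get(self):
        return self._balance

    def _set(self, value):
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value

    def _del(self):
        self._balance = 0

    # One property bundles get, set and delete; a single lambda
    # could only ever play one of these three roles.
    balance = property(_get, _set, _del)

a = Account()
a.balance = 10   # goes through _set
del a.balance    # goes through _del, resetting to 0
print(a.balance)
```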
Related
I heard from one guy that you should not use magic methods directly, but I think in some use cases I would have to use them directly. So, experienced devs: should I use Python magic methods directly?
I intended to show some benefits of not using magic methods directly:
1- Readability:
Using built-in functions like len() is much more readable than the corresponding magic/special method __len__(). Imagine source code full of magic methods instead of built-in functions... thousands of underscores...
2- Comparison operators:
class C:
    def __lt__(self, other):
        print('__lt__ called')

class D:
    pass

c = C()
d = D()

d > c
d.__gt__(c)
I haven't implemented __gt__ for either of those classes, but in d > c, when Python sees that class D doesn't have __gt__, it checks whether class C implements the reflected operation __lt__. It does, so we get '__lt__ called' in the output - which isn't the case with d.__gt__(c).
3- Extra checks:
class C:
    def __len__(self):
        return 'boo'

obj = C()
print(obj.__len__()) # fine
print(len(obj)) # error
or:
class C:
    def __str__(self):
        return 10

obj = C()
print(obj.__str__()) # fine
print(str(obj)) # error
As you see, when Python calls that magic methods implicitly, it does some extra checks as well.
4- This is the least important, but using, say, len() on built-in data types such as str is a little faster than calling __len__():
from timeit import timeit
string = 'abcdefghijklmn'
print(timeit("len(string)", globals=globals(), number=10_000_000))
print(timeit("string.__len__()", globals=globals(), number=10_000_000))
output:
0.5442426
0.8312854999999999
It's because of the lookup process (finding __len__ in the namespace). If you create a bound method before timing, it's faster:
bound_method = string.__len__
print(timeit("bound_method()", globals=globals(), number=10_000_000))
I'm not a senior developer, but my experience says that you shouldn't call magic methods directly.
Magic methods should be used to override behavior on your object. For example, if you want to define how your object is built, you override __init__. Afterwards, when you want to initialize it, you use MyNewObject() instead of MyNewObject.__init__().
For me, I tend to appreciate the answer given by Alex Martelli here:
When you see a call to the len built-in, you're sure that, if the program continues after that rather than raising an exception, the call has returned an integer, non-negative, and less than 2**31 -- when you see a call to xxx.__len__(), you have no certainty (except that the code's author is either unfamiliar with Python or up to no good;-).
If you want to know more about Python's magic methods, I strongly recommend taking a look on this documentation made by Rafe Kettler: https://rszalski.github.io/magicmethods/
No you shouldn't.
It's OK to use them in quick code problems, like on HackerRank, but not in production code. When I asked this question, I was using them as first-class functions. What I mean is, I used xlen = x.__mod__ instead of xlen = lambda y: x % y, which was more convenient. It's OK to use these kinds of snippets in simple programs, but not in any other case.
I am reading this article. In this article a line says :
For a Class C, an instance x of C and a method m of C the following
three method calls are equivalent:
type(x).m(x, ...)
C.m(x, ...)
x.m(...)
I tried to convert this statement into program like this :
class C:
    def __init__(self, a, c):
        self.a = a
        self.b = c
    def m(self):
        d = self.a + self.b

x = C(1, 2)
x.m()
print(type(x).m(x))
print(C.m(x))
print(x.m())
But I am getting no clue what these three method calls mean or how they work. If my program is using the method wrongly, then please correct it.
edit
I am not asking for modifications to this code; I am asking how those three method calls are used. If you can provide a proper example for each of the three calls, that would be very helpful to me.
If using Python 2.7, you should derive C from object in order to get the correct type from type(x), one which knows the method m.
class C(object):
    def __init__(self, a, c):
        self.a = a
        self.b = c
    def m(self):
        return self.a + self.b

x = C(1, 2)
x.m()
print(type(x).m(x))
print(C.m(x))
print(x.m())
I think in Python 3, this is implicit. And yes, instead of calculating d, I just return the result, so you see something.
Edit regarding your clarification:
The three ways to call the method are shown for illustration. I would not see any obvious reason for not using x.m() if possible. But in Python, that is a shortcut for: call the method m of the type of x on the instance x.
type(x).m(x) is the most literal way to write what is going on. Now, type(x) is C (at least in Python 3, or with new-style classes derived from object; otherwise it is instance), so the first and second ways of writing it are equivalent as well.
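A self-contained sketch of the equivalence (using a version of m that returns its result, so there is something to compare):

```python
class C(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def m(self):
        return self.a + self.b

x = C(1, 2)
# The three spellings resolve to the same call:
print(x.m())          # shortcut form
print(C.m(x))         # explicit class, explicit instance
print(type(x).m(x))   # most literal: look up m on x's type
```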
Your m isn't returning anything.
You want
def m(self):
    return self.a + self.b
Most likely within your C class
I am using some open-source python code that I need to make a slight modification to. Here is the starting situation.
Starting Situation
class BaseGatherer:
    def get_string(self):
        #a bunch of stuff I don't need to modify
        return self.my_string #set with a hard value in __init__()

class BaseDoer:
    def do_it(self, some_gatherer):
        #a bunch of stuff I don't need to modify
        string_needed = some_gatherer.get_string()
        self.do_stuff(string_needed)
Externally, this would get used as follows:
my_gatherer = BaseGatherer()
my_doer = BaseDoer()
my_doer.do_it(my_gatherer)
What I need to change
What I need to do is two things:
Have BaseGatherer::get_string() return its my_string with an inserted modification that changes each time it gets called. For example, the first time it gets called I get my_string[0:1] + 'loc=0' + my_string[1:], the second time I get my_string[0:1] + 'loc=10' + my_string[1:], etc.
Modify BaseDoer::do_it() to call BaseDoer::do_stuff() in a loop (until some stopping condition), each time setting string_needed with some_gatherer.get_string() which by #1 returns a different string with each call.
with the following restriction
The base code I am using is regularly updated and I don't want to modify that code at all; I want to be able to clone the repo I get it from and only possibly have to modify my "extended" code. It's ok to assume the names of the BaseGatherer and BaseDoer classes don't change in the base code, nor do the names of the methods I care about here, though some auxiliary methods that I don't need to modify will get updated (which is key for me).
My Question
My main question is what is the best, most Pythonic, way to do this?
My Attempt
Given the restriction I mentioned, my first inclination is to write derived classes of both BaseGatherer and BaseDoer which make the changes I need, by writing new versions of the get_string() and do_it() methods respectively. But I have a feeling that I should use function decorators to do this. I've read up on them a little and I get the basic idea (they wrap a function so that you can control/modify the parameters passed to, or the values returned from, the function you are wrapping, without modifying that function; please correct me if I am missing something crucial). But I don't know how to implement this in the derived class, neither syntactically nor logically. For example, do I have to give the function decorator that I write a @classmethod decorator? If so, why?
Here is what I did. It works but I want to learn and understand a) what is the right way to do what I want and b) how to actually do it.
class DerivedGatherer(BaseGatherer):
    def __init__(self):
        super(DerivedGatherer, self).__init__() #I'm in Python 2.7 :(
        self.counter = 0 #new variable not in BaseGatherer

    #DON'T override BaseGatherer::get_string(), just write a new function
    def get_string_XL(self):
        self.counter += 10
        return self.my_string[0:1] + 'loc=' + str(self.counter) + self.my_string[1:]

class DerivedDoer(BaseDoer):
    def do_it_XL(self, some_derived_gatherer):
        while not some_stop_condition():
            string_needed = some_derived_gatherer.get_string_XL()
            self.do_stuff(string_needed)
I would then call it just as above but create derived instances and call their XL methods instead of the base class one's.
But there are problems with this that don't satisfy my goal/requirement above: both BaseGatherer::get_string() and BaseDoer::do_it() perform a lot of other functionality which I would have to copy into my new functions (see the comments in them). This means that when the code I'm using gets updated, I have to repeat the copying to update my derived classes. Yet I am ONLY changing what you see here: inserting something into the string that changes with each call, and putting a loop at the end of do_it(). This is why I have a feeling that function decorators are the way to go: that I can wrap both get_string() and do_it() so that, for the former, I can modify the string on each call by looking at a local counter, and for the latter, I can have a loop that calls the base do_it() repeatedly, passing a different string each time (it's OK to call do_it() multiple times).
Any help with suggestions on the best way to do this and how would be very greatly appreciated.
Thank you!
This is most easily done with plain inheritance, no repetition of code:
class DerivedGatherer(BaseGatherer):
    def __init__(self):
        super(DerivedGatherer, self).__init__()
        self.counter = 0

    def get_string(self):
        self.counter += 10
        super_string = super(DerivedGatherer, self).get_string()
        return super_string[0:1] + 'loc=' + str(self.counter) + super_string[1:]

class DerivedDoer(BaseDoer):
    def do_it(self, some_derived_gatherer):
        while not some_stop_condition():
            super(DerivedDoer, self).do_it(some_derived_gatherer)
As Martijn has explained to you, you can easily call a method in the base class from a method in the subclass. You can do it with super to ensure you get the entire class structure, as he shows. For illustration, though, if you have a simple hierarchy with only one superclass and one subclass, this is how you can do it:
>>> class Foo(object):
...     def xxx(self):
...         return "Foo.xxx was called"
...
>>> class Bar(Foo):
...     def xxx(self):
...         return "Bar.xxx was called and then %s" % Foo.xxx(self)
...
>>> Bar().xxx()
'Bar.xxx was called and then Foo.xxx was called'
Having a class
class A(object):
    z = 0
    def Func1(self):
        return self.z
    def Func2(self):
        return A.z
Both methods (Func1 and Func2) give the same result and are only included in this artificial example to illustrate the two possible methods of how to address z.
The result of Func* would only differ if an instance would shadow z with something like self.z = None.
What is the proper python way to access the class variable z using the syntax of Func1 or Func2?
I would say that the proper way to get access to the variable is simply:
a_instance.z #instance variable 'z'
A.z #class variable 'z'
No need for Func1 and Func2 here.
As a side note, if you must write Func2, it seems like a classmethod might be appropriate:
@classmethod
def Func2(cls):
    return cls.z
As a final note, which version you use within methods (self.z vs. A.z vs. cls.z with classmethod) really depends on how you want your API to behave. Do you want the user to be able to shadow A.z by setting an instance attribute z? If so, then use self.z. If you don't want that shadowing, you can use A.z. Does the method need self? If not, then it's probably a classmethod, etc.
I would usually use self.z, because in case there are subclasses with different values for z it will choose the "right" one. The only reason not to do that is if you know you will always want the A version notwithstanding.
Accessing via self or via a classmethod (see mgilson's answer) also facilitates the creation of mixin classes.
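A short sketch of why self.z picks the subclass's value:

```python
class A(object):
    z = 0

    def Func1(self):
        return self.z   # looked up on the instance, then on its class

class B(A):
    z = 7               # subclass overrides the class variable

print(A().Func1())
print(B().Func1())      # self.z finds B.z, the "right" one
```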
If you don't care about value clobbering and things like that, you're fine with self.z. Otherwise, A.z will undoubtedly evaluate to the class variable. Beware, though, about what would happen if a subclass B redefines z but not Func2:
class B(A):
    z = 7

b = B()
b.Func2() # Returns 0, not 7
Which is quite logical, after all. So, if you want to access a class variable in a, somehow, polymorphic way, you can just do one of the following:
self.__class__.z
type(self).z
According to the documentation, the second form does not work with old-style classes, so the first form is usually more compatible across Python 2.x versions. However, the second form is the safest one for new-style classes and, thus, for Python 3.x, as classes may redefine the __class__ attribute.
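A sketch of the polymorphic access (the same A/B structure, reconstructed here so the snippet stands alone):

```python
class A(object):
    z = 0

    def Func2(self):
        # type(self).z follows the instance's real class,
        # unlike the hard-coded A.z above.
        return type(self).z

class B(A):
    z = 7

print(A().Func2())
print(B().Func2())  # 7 with type(self).z; A.z would have given 0
```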
For specific debugging purposes I'd like to wrap the __del__ method of an arbitrary object to perform extra tasks, like writing the last value of the object to a file.
Ideally I want to write
monkey(x)
and it should mean that the final value of x is printed when x is deleted.
Now I figured that __del__ is a class method. So the following is a start:
class Test:
    def __str__(self):
        return "Test"

def p(self):
    print(str(self))

def monkey(x):
    x.__class__.__del__ = p

a = Test()
monkey(a)
del a
However, if I want to monkey-patch specific objects only, I suppose I need to dynamically rewrite their class to a new one?! Moreover, I need to do this anyway, since I cannot access __del__ of built-in types?
Anyone knows how to implement that?
While special 'double underscore' methods like __del__, __str__, __repr__, etc. can be monkey-patched on the instance level, they'll just be ignored, unless they are called directly (e.g., if you take Omnifarious's answer: del a won't print a thing, but a.__del__() would).
If you still want to monkey patch a single instance a of class A at runtime, the solution is to dynamically create a class A1 which is derived from A, and then change a's class to the newly-created A1. Yes, this is possible, and a will behave as if nothing has changed - except that now it includes your monkey patched method.
Here's a solution based on a generic function I wrote for another question:
Python method resolution mystery
def override(p, methods):
    oldType = type(p)
    newType = type(oldType.__name__ + "_Override", (oldType,), methods)
    p.__class__ = newType

class Test(object):
    def __str__(self):
        return "Test"

def p(self):
    print(str(self))

def monkey(x):
    override(x, {"__del__": p})

a = Test()
b = Test()
monkey(a)
print "Deleting a:"
del a
print "Deleting b:"
del b
del a deletes the name 'a' from the namespace, but not the object referenced by that name. See this:
>>> x = 7
>>> y = x
>>> del x
>>> print y
7
Also, some_object.__del__ is not guaranteed to be called at all.
Also, I already answered your question here (in German).
You can also inherit from some base class and override the __del__ method (the only thing you would then need is to override the class when constructing an object).
Or you can use the super() built-in.
Edit: This won't actually work, and I'm leaving it here largely as a warning to others.
You can monkey patch an individual object. self will not get passed to functions that you monkey patch in this way, but that's easily remedied with functools.partial.
Example:
def monkey_class(x):
    x.__class__.__del__ = p

def monkey_object(x):
    x.__del__ = functools.partial(p, x)