I heard from someone that you should not use magic methods directly, yet I think there are use cases where I would have to use them directly. So, experienced devs: should I use Python magic methods directly?
I'd like to show some benefits of not calling magic methods directly:
1- Readability:
Using built-in functions like len() is much more readable than the corresponding magic/special method __len__(). Imagine source code full of magic methods instead of built-in functions... thousands of underscores...
2- Comparison operators:
class C:
    def __lt__(self, other):
        print('__lt__ called')

class D:
    pass

c = C()
d = D()

d > c        # prints '__lt__ called'
d.__gt__(c)  # returns NotImplemented; C.__lt__ is never tried
I haven't implemented __gt__ for either of those classes, but with d > c, when Python sees that class D doesn't define __gt__, it checks whether class C implements the reflected __lt__. It does, so we get '__lt__ called' in the output, which isn't the case with d.__gt__(c).
3- Extra checks:
class C:
    def __len__(self):
        return 'boo'

obj = C()
print(obj.__len__())  # fine
print(len(obj))       # error
or:
class C:
    def __str__(self):
        return 10

obj = C()
print(obj.__str__())  # fine
print(str(obj))       # error
As you can see, when Python calls those magic methods implicitly, it does some extra checks as well.
4- Speed: This is the least important, but calling, say, len() on built-in data types such as str is a little faster than calling __len__():
from timeit import timeit
string = 'abcdefghijklmn'
print(timeit("len(string)", globals=globals(), number=10_000_000))
print(timeit("string.__len__()", globals=globals(), number=10_000_000))
output:
0.5442426
0.8312854999999999
It's because of the lookup process (finding __len__ in the namespace). If you create the bound method before timing, it's faster:
bound_method = string.__len__
print(timeit("bound_method()", globals=globals(), number=10_000_000))
I'm not a senior developer, but my experience says that you shouldn't call magic methods directly.
Magic methods should be used to override a behavior on your object. For example, if you want to define how your object is built, you override __init__. Afterwards, when you want to create and initialize an instance, you use MyNewObject() instead of MyNewObject.__init__().
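A minimal sketch of that pattern (the constructor arguments here are made up):

class MyNewObject:
    def __init__(self, name):      # override how the object is built
        self.name = name

obj = MyNewObject("demo")          # the normal way: Python calls __init__ for you
# obj.__init__("demo")             # legal, but it only re-initialises an existing object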
For me, I tend to appreciate the answer given by Alex Martelli here:
When you see a call to the len built-in, you're sure that, if the program continues after that rather than raising an exception, the call has returned an integer, non-negative, and less than 2**31 -- when you see a call to xxx.__len__(), you have no certainty (except that the code's author is either unfamiliar with Python or up to no good;-).
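A small illustration of that guarantee, with a deliberately broken class:

class Broken:
    def __len__(self):
        return -1

obj = Broken()
print(obj.__len__())   # -1, no complaint
print(len(obj))        # ValueError: __len__() should return >= 0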
If you want to know more about Python's magic methods, I strongly recommend taking a look on this documentation made by Rafe Kettler: https://rszalski.github.io/magicmethods/
No, you shouldn't.
It's OK to use them in quick coding problems, like on HackerRank, but not in production code. When I asked this question I was using them as first-class functions; what I mean is, I used xlen = x.__mod__ instead of xlen = lambda y: x % y, which was more convenient. It's OK to use these kinds of snippets in simple programs, but not in any other case.
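For the record, if all you need is the operation as a first-class callable, here is a sketch of the usual alternatives (the value of x is made up):

import operator
from functools import partial

x = 17
xmod = partial(operator.mod, x)   # equivalent to lambda y: x % y
print(xmod(5))                    # 2
print(x.__mod__(5))               # 2, same result via the dunder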
Related
If someone writes a class in Python and fails to specify their own __repr__() method, then a default one is provided for them. But suppose we want to write a function that has the same, or similar, behavior to the default __repr__(), even if the actual __repr__() for the class was overloaded. That is, suppose we want to write a function that behaves like the default __repr__() regardless of whether someone overloaded the __repr__() method or not. How might we do it?
class DemoClass:
    def __init__(self):
        self.var = 4
    def __repr__(self):
        return str(self.var)

def true_repr(x):
    # [magic happens here]
    s = "I'm not implemented yet"
    return s

obj = DemoClass()
print(obj.__repr__())
print(true_repr(obj))
Desired Output:
print(obj.__repr__()) prints 4, but print(true_repr(obj)) prints something like:
<__main__.DemoClass object at 0x0000000009F26588>
You can use object.__repr__(obj). This works because the default repr behavior is defined in object.__repr__.
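Applied to the DemoClass above, a quick sketch:

obj = DemoClass()
print(repr(obj))             # 4, the overridden __repr__
print(object.__repr__(obj))  # <__main__.DemoClass object at 0x...>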
Note, the best answer is probably just to use object.__repr__ directly, as the others have pointed out. But one could implement that same functionality roughly as:
>>> def true_repr(x):
...     type_ = type(x)
...     module = type_.__module__
...     qualname = type_.__qualname__
...     return f"<{module}.{qualname} object at {hex(id(x))}>"
...
So, given some class A whose __repr__ has been overridden....
>>> A()
hahahahaha
>>> true_repr(A())
'<__main__.A object at 0x106549208>'
>>>
Typically we can use object.__repr__ for that, but this will do the "object repr" for every item, so:
>>> object.__repr__(4)
'<int object at 0xa6dd20>'
Since an int is an object, but with __repr__ overridden.
If you want to go up one level of overriding, we can use super(..):
>>> super(type(4), 4).__repr__() # going up one level
'<int object at 0xa6dd20>'
For an int that again means we will print <int object at ...>, but if we were, for instance, to subclass int, then it would use the __repr__ of int again, like:
class special_int(int):
    def __repr__(self):
        return 'Special int'
Then it will look like:
>>> s = special_int(4)
>>> super(type(s), s).__repr__()
'4'
What we do here is create a proxy object with super(..). super will walk the method resolution order (MRO) of the object and try to find the first function (from a superclass of s) that overrides the function. With single inheritance, that is the closest parent that overrides the function, but if there is some multiple inheritance involved, this is more tricky. We thus select the __repr__ of that parent, and call that function.
This is also a rather weird application of super, since usually the class (here type(s)) is a fixed one and does not depend on the type of s itself; otherwise multiple such super(..) calls could result in an infinite loop.
But usually it is a bad idea to break overriding anyway. The reason a programmer overrides a function is to change the behavior. Not respecting this can of course sometimes result in useful functions, but frequently it will mean that the code's contracts are no longer satisfied. For example, if a programmer overrides __eq__, they will usually also override __hash__; if you use the hash of another class together with the real __eq__, then things will start breaking.
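A minimal sketch of that kind of contract breakage (the class name is made up): equal objects must hash equally, and mixing a class's real __eq__ with object's identity-based __hash__ violates that.

class Key:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, Key) and self.value == other.value
    def __hash__(self):
        return hash(self.value)

a, b = Key(1), Key(1)
print({a: "found"}[b])                           # 'found': __hash__ and __eq__ agree
print(object.__hash__(a) == object.__hash__(b))  # False: identity-based hashes break the contract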
Calling magic functions directly is also frequently seen as an antipattern, so you had better avoid that as well.
A Python class's __call__ method lets us specify how an instance of the class should behave as a function. Can we do the "opposite", i.e. specify how an instance should behave as an argument to an arbitrary other function?
As a simple example, suppose I have a WrappedList class that wraps lists, and when I call a function f on a member of this class, I want f to be mapped over the wrapped list. For instance:
x = WrappedList([1, 2, 3])
print(x + 1) # should be WrappedList([2, 3, 4])
d = {1: "a", 2: "b", 3:"c"}
print(d[x]) # should be WrappedList(["a", "b", "c"])
Calling the hypothetical __call__ analogue I'm looking for __arg__, we could imagine something like this:
class WrappedList(object):
    def __init__(self, to_wrap):
        self.wrapped = to_wrap

    def __arg__(self, func):
        return WrappedList(map(func, self.wrapped))
Now, I know that (1) __arg__ doesn't exist in this form, and (2) it's easy to get the behavior in this simple example without any tricks. But is there a way to approximate the behavior I'm looking for in the general case?
You can't do this in general.*
You can do something equivalent for most of the builtin operators (like your + example), and a handful of builtin functions (like abs). They're implemented by calling special methods on the operands, as described in the reference docs.
Of course that means writing a whole bunch of special methods for each of your types—but it wouldn't be too hard to write a base class (or decorator or metaclass, if that doesn't fit your design) that implements all those special methods in one place, by calling the subclass's __arg__ and then doing the default thing:
class ArgyBase:
    def __add__(self, other):
        return self.__arg__() + other

    def __radd__(self, other):
        return other + self.__arg__()

    # ... and so on
And if you want to extend that to a whole suite of functions that you create yourself, you can give them all special-method protocols similar to the builtin ones, and expand your base class to cover them. Or you can just short-circuit that and use the __arg__ protocol directly in those functions. To avoid lots of repetition, I'd use a decorator for that.
import functools

def argify(func):
    def _arg(arg):
        try:
            return arg.__arg__()
        except AttributeError:
            return arg
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [_arg(arg) for arg in args]
        kwargs = {kw: _arg(arg) for kw, arg in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper

@argify
def spam(a, b):
    return a + 2 * b
And if you really want to, you can go around wrapping other people's functions:
sin = argify(math.sin)
… or even monkeypatching their modules:
requests.get = argify(requests.get)
… or monkeypatching a whole module dynamically a la early versions of gevent, but I'm not going to even show that, because at this point we're getting into don't-do-this-for-multiple-reasons territory.
You mentioned in a comment that you'd like to do this to a bunch of someone else's functions without having to specify them in advance, if possible. Does that mean every function that ever gets constructed in any module you import? Well, you can even do that if you're willing to create an import hook, but that seems like an even worse idea. Explaining how to write an import hook and either AST-patch each function creation node or insert wrappers around the bytecode or the like is way too much to get into here, but if your research abilities exceed your common sense, you can figure it out. :)
As a side note, if I were doing this, I wouldn't call the method __arg__, I'd call it either arg or _arg. Besides being reserved for future use by the language, the dunder-method style implies things that aren't true here (special-method lookup instead of a normal call, you can search for it in the docs, etc.).
* There are languages where you can, such as C++, where a combination of implicit casting and typed variables instead of typed values means you can get a method called on your objects just by giving them an odd type with an implicit conversion operator to the expected type.
Based on my understanding, magic methods such as __str__, __next__, and __setattr__ are built-in features in Python. They are called automatically, for example when an instance object is created, and they also play a role in overriding behavior. What other important features of magic methods am I omitting or ignoring?
"magic" methods in python do specific things in specific contexts.
For example, to "override" the addition operator (+), you'd define a __add__ method. subtraction is __sub__, etc.
Other methods are called during object creation (__new__, __init__). Other methods are used with specific language constructs (__enter__, __exit__ and you might argue __init__ and __next__).
Really, there's nothing special about magic methods other than they are guaranteed to be called by the language at specific times. As the programmer, you're given the power to hook into structure and change the way an object behaves in those circumstances.
For a near-complete summary, have a look at the Python data model.
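As a small, made-up sketch of those two kinds of hooks, an operator (__add__) and a with-statement construct (__enter__/__exit__):

class Money:
    def __init__(self, amount):
        self.amount = amount
    def __add__(self, other):             # hooks into the + operator
        return Money(self.amount + other.amount)
    def __repr__(self):
        return f"Money({self.amount})"

class Session:
    def __enter__(self):                  # hooks into the with statement
        print("opened")
        return self
    def __exit__(self, exc_type, exc, tb):
        print("closed")
        return False

print(Money(2) + Money(3))   # Money(5)
with Session():
    print("working")         # prints opened, working, closed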
There is a lot you can do with magic methods, and since it can be hard to find the right way to get started, I'd like to give you some inspiration based on what I use a lot.
While you're probably already using some of them (like __init__), I would start learning with the operator-specific magic methods, which helped me a lot in optimising my classes and how I use them. The magic method __mul__, for example, allows you to describe what should happen to your class when it is used with the multiplication operator. In the following example you can see that the interpreter first looks for the left operand's __mul__ method, and if that doesn't exist (like in the second example) it tries to call the right operand's __rmul__ method.
Example 1:
class a:
    def __mul__(self, other):
        print("__mul__ a")
    def __rmul__(self, other):
        print("__rmul__ a")

class b:
    def __mul__(self, other):
        print("__mul__ b")
    def __rmul__(self, other):
        print("__rmul__ b")

ia = a()
ib = b()

ia * ib
# prints __mul__ a
Example 2:
class a:
    pass

class b:
    def __mul__(self, other):
        print("__mul__ b")
    def __rmul__(self, other):
        print("__rmul__ b")

ia = a()
ib = b()

ia * ib
# prints __rmul__ b
Any other operator works analogously to this example. I hope that helps you get started enhancing your magic method skills.
hint:
To make your classes comparable you can use __cmp__ (Python 2 only; in Python 3 use the rich comparison methods instead, as sketched below).
The method is called with the two objects that are compared.
Return a positive value if the first one is bigger.
Return a negative value if the second one is bigger.
Return zero if they are equal.
You don't have to use the magic methods for every possibility.
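In Python 3 the same effect comes from the rich comparison methods; a sketch using functools.total_ordering to fill in the rest (the class name is made up):

from functools import total_ordering

@total_ordering
class Box:
    def __init__(self, size):
        self.size = size
    def __eq__(self, other):
        return self.size == other.size
    def __lt__(self, other):
        return self.size < other.size

print(Box(1) < Box(2))    # True
print(Box(3) >= Box(2))   # True, derived by total_ordering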
Is there a way in Python to either "type" functions, or for functions to inherit test suites? I am doing some work evaluating several different implementations of different functions against various criteria (for example, I may evaluate different sort functions based on speed for a given array size and on memory requirements). I want to be able to automate the testing of the functions. So I would like a way to identify a function as being an implementation of a certain operator, so that the test suite can just grab all functions that are implementations of that operator and run them through the tests.
My initial thought was to use classes and subclasses, but the class syntax is a bit finicky for this purpose, because I would first have to create an instance of the class before I could call it as a function... that is, unless there is a way to allow __init__ to return a type other than None.
Can metaclasses or objects be used in this fashion?
Functions are first-class objects in Python and you can treat them as such, e.g. add some metadata via setattr:
>>> def function1(a):
...     return 1
...
>>> type(function1)
<type 'function'>
>>> setattr(function1, 'mytype', 'F1')
>>> function1.mytype
'F1'
Or the same using a simple parametrized decorator:
def mytype(t):
    def decorator(f):
        f.mytype = t
        return f
    return decorator

@mytype('F2')
def function2(a, b, c):
    return 2
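To actually drive the test suite from those tags, you could then collect everything in a module that carries a given mytype; a rough sketch (the module and function names in the usage comment are placeholders):

import inspect

def functions_of_type(module, type_tag):
    # all module-level functions tagged with the given mytype
    return [obj for _, obj in inspect.getmembers(module, inspect.isfunction)
            if getattr(obj, 'mytype', None) == type_tag]

# for impl in functions_of_type(my_sorts_module, 'F2'):
#     run_tests(impl)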
I apologize, as I cannot comment, but just to clarify: you stated "I would first have to create an instance of the class before I could call it as a function...". Does this not accomplish what you are trying to do?
class functionManager:
    def __init__(self, testFunction1=importedTests.testFunction1):
        self.testFunction1 = testFunction1

functionManager = functionManager()
Then just include the line from functionManagerFile import functionManager wherever you want to use it?
Consider these two classes:
class Test(int):
    difference = property(lambda self: self.__sub__)

class Test2(int):
    difference = lambda self: self.__sub__
Is there any difference between these two classes? New: If so, what is the purpose of using the property to store a lambda function that returns another function?
Update: Changed the question to what I should have asked in the first place. Sorry. Even though I can now know the solution from the answers, it would be unfair for me to do a self answer in these circumstances. (without leaving the answer for a few days at least).
Update 2: Sorry, I wasn't clear enough again. The question was about the particular construction, not properties in general.
For Test, you could use .difference; for Test2, you'd need to use .difference() instead.
As for why you might use it, a potential use would be to replace something that was previously stored directly as an attribute with a dynamic calculation instead.
For instance, suppose you used to store an attribute obj.a, but then you expanded your implementation so that it instead knows obj.b and obj.c, which can be used to calculate a but can also be used to calculate different things. If you still wanted to provide backwards compatibility with code that used the previous object form, you could implement obj.a as a property() that calculates a based on b and c, and it would behave for that older code as it previously did, with no other code modification needed.
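A small sketch of that backwards-compatibility pattern (the class and the way a is derived from b and c are made up):

class Rect:
    def __init__(self, b, c):
        self.b = b
        self.c = c

    @property
    def a(self):     # previously a stored attribute, now derived from b and c
        return self.b * self.c

r = Rect(3, 4)
print(r.a)           # 12: old code that read obj.a keeps working unchanged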
Edit: Ah, I see. You are asking why anybody would write exactly the code above. It's not, in fact, a question about why to make a lambda or a property at all, nor about the differences between the two examples, nor even about why you would want to make a property out of a lambda.
Your question is "Why would anybody make a property of a lambda that just returns self.__sub__".
And the answer is: One wouldn't.
Let's assume somebody wants to do this:
>>> foo = MyInt(8)
>>> print(foo.difference(7))
1
So he tries to accomplish it by this class:
class MyInt(int):
    def difference(self, i):
        return self - i
But that's two lines, and since he is a Ruby programmer and believes that good code is code that has few lines of code, he changes it to:
class MyInt(int):
    difference = int.__sub__
To save one line of code. But apparently, things are still too easy. He learned in Ruby that a problem is not properly solved unless you use anonymous code blocks, so he will try to use Python's nearest equivalent, lambdas, for absolutely no reason:
class MyInt(int):
    difference = lambda self, i: self - i
All of these work. But things are still WAY too uncomplicated, so instead he decides to make things more complex by not doing the calculation, but returning the sub method:
class MyInt(int):
    difference = lambda self: self.__sub__
Ah, but that doesn't work, because he needs to call difference to get the sub-method:
>>> foo = MyInt(8)
>>> print(foo.difference()(7))
1
So he makes it a property:
class MyInt(int):
    difference = property(lambda self: self.__sub__)
There. Now he has found the maximum complexity to solve a non-problem.
But normal people wouldn't do any of this; they would just do:
>>> foo = 8
>>> print(foo - 7)
1
People have given their opinion without analyzing it; it can be better answered by Python itself. Below is the code to check the difference:
import difflib
from pprint import pprint

s1 = """
class Test(int):
    difference = property(lambda self: self.__sub__)
"""
s2 = """
class Test(int):
    difference = lambda self: self.__sub__
"""

d = difflib.Differ()
print("and the difference is...")
for c in d.compare(s1, s2):
    if c[0] in '+-':
        print(c[1:], end=' ')
and as expected it says
and the difference is...
p r o p e r t y ( )
Yes, in one case difference is a property. If you are asking what a property is, you can see it as a method that gets called automatically when the attribute is accessed.
Yes, in one case difference is a property
The purpose of a property can be:
1.
To provide get/set hooks while accessing an attribute.
E.g. if you used to have a class with an attribute a, and later on you want to do something else when it is set, you can convert that attribute to a property without affecting the interface or how users use your class. So in the example below, classes A and B are exactly the same for a user, but internally in B you can do many things in getX/setX.
class A(object):
    def __init__(self):
        self.x = 0

a = A()
a.x = 1

class B(object):
    def __init__(self):
        self.x = 0

    def getX(self): return self._x
    def setX(self, x): self._x = x
    x = property(getX, setX)

b = B()
b.x = 1
2.
As implied in 1, a property is a better alternative to explicit get/set calls: instead of getX/setX the user uses the less verbose obj.x and obj.x = 1. Though personally I never make a property just for getting or setting an attribute; if the need arises it can be done later on, as shown in #1.
As far as difference is concerned, a property provides you with get/set/del hooks for an attribute, but in the example you have given, a method (lambda or proper function) can only be used for one of get, set or del, so you would need three such callables, e.g. differenceGet, differenceSet, differenceDel.
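A sketch of what that full version could look like with separate callables for get, set, and delete (the names are illustrative; set and delete simply refuse, since a derived value can't sensibly be assigned):

class Test3(int):
    def _get(self):
        return self.__sub__
    def _set(self, value):
        raise AttributeError("difference is read-only")
    def _del(self):
        raise AttributeError("difference cannot be deleted")
    difference = property(_get, _set, _del)

t = Test3(10)
print(t.difference(3))   # 7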