Here is an example snippet from Mark Lutz's book "Learning Python". I found it difficult to understand how attribute accesses are translated into getattr() calls in the metaclass:
>>> class A(type):
    def __getattr__(cls, name):
        return getattr(cls.data, name)
>>> class B(metaclass=A):
    data = 'spam'
>>> B.upper()
'SPAM'
>>> B.upper
<built-in method upper of str object at 0x029E7420>
>>> B.__getattr__
<bound method A.__getattr__ of <class '__main__.B'>>
>>> B.data = [1, 2, 3]
>>> B.append(4)
>>> B.data
[1, 2, 3, 4]
>>> B.__getitem__(0)
1
>>> B[0]
TypeError: 'A' object does not support indexing
I have the following questions:
1. How does B.upper() yield 'SPAM'? Is it because B.upper() => A.__getattr__(B, upper()) => getattr(B.data, upper())? But a call like getattr('spam', upper()) gives the error "NameError: name 'upper' is not defined".
2. What path does B.upper take to yield <built-in method upper of str object at 0x029E7420>? Does it go through getattr too, and what are the true values of the arguments?
3. Does B.append(4) go through A.__getattr__(cls, name)? If it does, what are the true values of the arguments in getattr(cls.data, name) in this case?
4. How does B.__getitem__(0) yield 1? What are the true values of the arguments in getattr(cls.data, name) in this case?
B.upper() first looks up B.upper and then calls it with no arguments. B.upper is looked up by trying several options in a certain order, eventually trying type(B).__getattr__(B, 'upper'), which in this case is A.__getattr__(B, 'upper'), which returns 'spam'.upper.
As mentioned above, B.upper goes through several options, in this case reaching type(B).__getattr__(B, 'upper') which is A.__getattr__(B, 'upper').
Yes, in this case, B.append will reach A.__getattr__(B, 'append') which will return B.data.append.
B.__getitem__(0) will in this case look up B.__getitem__ and find it via A.__getattr__(B, '__getitem__') which will return B.data.__getitem__.
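You can check that equivalence directly in the interpreter. A quick sanity check of my own (after setting B.data back to 'spam'):
>>> B.data = 'spam'
>>> type(B).__getattr__(B, 'upper')()
'SPAM'
>>> B.upper()
'SPAM'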
Also, note the final example: B[0] doesn't work because B's class (the metaclass A) doesn't directly define a __getitem__ method. "Special" methods such as __getitem__ are looked up differently when used via their special syntax, such as B[0] here: they are looked up directly on the type, bypassing __getattr__.
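For comparison, the indexing syntax does work once the metaclass itself defines __getitem__ explicitly. Here is a minimal sketch (A2 and B2 are names I made up for illustration, not from the book):
>>> class A2(type):
    def __getattr__(cls, name):
        return getattr(cls.data, name)
    def __getitem__(cls, index):
        # an explicit special method on the metaclass is what B2[0] looks for
        return cls.data[index]
>>> class B2(metaclass=A2):
    data = [1, 2, 3]
>>> B2[0]
1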
First, you don't need the extra confusion of a metaclass to get this behaviour; you can just as easily use a regular class and an instance as the example:
class A():
    def __getattr__(cls, name):
        return getattr(cls.data, name)

B = A()
B.data = "spam"
>>> B.data
'spam'
>>> B.upper
<built-in method upper of str object at 0x1057da730>
>>> B.upper()
'SPAM'
>>> B.__getitem__
<method-wrapper '__getitem__' of str object at 0x1057da730>
>>> B.__getitem__(0)
's'
>>> B[0]
Traceback (most recent call last):
File "<pyshell#135>", line 1, in <module>
B[0]
TypeError: 'A' object does not support indexing
Next, keep in mind that B[0] does not look up B.__getitem__ through your __getattr__; it tries to access __getitem__ directly on type(B), which does not have one, so the indexing fails.
How does B.upper() yield 'SPAM'? Is it because B.upper() => A.__getattr__(B, upper()) => getattr(B.data, upper())? But a call like getattr('spam', upper()) gives the error "NameError: name 'upper' is not defined".
getattr('spam', upper()) does not make any sense; the name of the attribute is always a string, so B.upper (no call yet) is equivalent to getattr(B, "upper"), and then you call the method it returns.
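For example (an illustrative session of my own, not from the original post):
>>> getattr('spam', 'upper')          # the attribute name is passed as a string
<built-in method upper of str object at 0x...>
>>> getattr('spam', 'upper')()        # then you call the bound method it returns
'SPAM'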
What path does B.upper take to yield <built-in method upper of str object at 0x029E7420>? Does it go through getattr too, and what are the true values of the arguments?
Is there any reason you are not just adding a print statement to check?
class A():
    def __getattr__(cls, name):
        print("calling __getattr__ for this name: %r" % name)
        return getattr(cls.data, name)
>>> B = A()
>>> B.data = "spam"
>>> B.upper
calling __getattr__ for this name: 'upper'
<built-in method upper of str object at 0x1058037a0>
Questions 3 and 4 are both answered by adding this print statement:
>>> B.data = [1,2,3,4]
>>> B.append(5)
calling __getattr__ for this name: 'append'
>>> B.__getitem__(0)
calling __getattr__ for this name: '__getitem__'
1
Related
I would like to know, if there is a way to overload operators in Python in runtime. For instance:
class A:
    pass

a = A()
a.__str__ = lambda self: "noice"
print(str(a))
The desired output is "noice", but the given code uses object's implementation of __str__ instead, yielding something along the lines of <__main__.A object at 0x000001CAB2051490>.
Why doesn't the code use my overridden implementation of the function?
Python version used is 3.9.2.
When you call str(a), it resolves to the equivalent of a.__class__.__str__(a), not a.__str__().
>>> A.__str__ = lambda self: "noice"
>>> str(a)
'noice'
You have to assign that function to the class, not an instance of the class.
>>> class A:
... pass
...
>>> a = A()
>>> a.__str__ = lambda x: 'hi'
>>> print(a)
<__main__.A object at 0x000000A4D16C1D30>
>>> A.__str__ = lambda x: 'hi'
>>> print(a)
hi
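If you really do want per-instance behaviour, one workaround is to have the class-level __str__ delegate to an optional attribute stored on the instance. A sketch (the attribute name _str_override is my own invention):
>>> class A:
    def __str__(self):
        # use a per-instance override if one has been attached, else the default
        override = self.__dict__.get('_str_override')
        return override(self) if override else super().__str__()
>>> a = A()
>>> a._str_override = lambda self: "noice"
>>> str(a)
'noice'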
Consider the following case:
>>> "a".capitalize.__call__
<method-wrapper '__call__' of builtin_function_or_method object at 0x1022e70d8>
>>> "a".capitalize.__call__.__call__
<method-wrapper '__call__' of method-wrapper object at 0x1022e2b70>
>>> id("a".capitalize.__call__)
4331547448
>>> id("a".capitalize.__call__.__call__)
4331547504
>>> id("a".capitalize.__call__) == id("a".capitalize.__call__.__call__)
False
How can I establish at runtime that __call__ refers to the same base symbol in both cases?
Edit:
It is possible that expecting something like this to exist for __call__ in the first place is unreasonable, because __call__ might not have a base symbol that is not bound to any object. In that case, what is the best way to detect that, other than keeping a list of special names (and how can such a list be built authoritatively)?
(Note: this won't work for builtin_function_or_method objects, as they have no __func__.)
If you're only working with your classes you can compare __func__:
>>> class Foo:
... def bar(self):
... pass
...
>>>
>>> a = Foo()
>>> b = Foo()
>>> a.bar.__func__
<function Foo.bar at 0x7f6f103e5ae8>
>>> a.bar.__func__ is b.bar.__func__
True
But I'd say it's better to do it the other way around: instead of trying to get the function from the instance, get up to the type first:
type(a).bar is type(b).bar
Which should work even for builtins:
>>> type("").capitalize is type("a").capitalize
True
And for hierarchies:
>>> class A:
... def foo(self):...
...
>>> class B(A):...
...
>>> a = A()
>>> b = B()
>>> type(a).foo is type(b).foo
True
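Applying the same idea to the __call__ case from the question: every attribute access builds a fresh method-wrapper, but going up to the type gives you a stable object to compare. A quick check (this relies on CPython returning the same slot wrapper for repeated lookups on a type):
>>> f = "a".capitalize
>>> g = "b".capitalize
>>> f.__call__ is g.__call__                  # a new method-wrapper on every access
False
>>> type(f).__call__ is type(g).__call__      # the same wrapper on the shared type
True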
I am wondering what kind of difference exists between a class that inherits from dict and one that inherits from str.
Here is my code
class MyDict3(str):
    def __init__(self):
        self.a = None

class MyDict(dict):
    def __init__(self):
        self.a = None
These classes are what I made for clarification, and then I typed the following:
>>> mydict['a'] = 1
>>> mydict
{'a': 1}
>>> mydict3['a'] = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'MyDict3' object does not support item assignment
Why does my mydict3['a'] raise an error?
The only difference I made is between MyDict(dict) and MyDict3(str).
As far as I know, the class I specified (dict, str) is nothing but a constructor, like in C++ or Java.
Please give me a clear answer on that.
Why does my mydict3['a'] raise an error? The only difference I made is between MyDict(dict) and MyDict3(str). As far as I know, the class I specified (dict, str) is nothing but a constructor, like in C++ or Java.
I believe you're confusing two things here, thinking that an attribute and an item are the same thing, like in the following JavaScript code:
> foo = {'a': 42};
{ a: 42 }
> foo.a
42
> foo['a']
42
> foo.a === foo['a']
true
But in Python, foo.a and foo['a'] are two different mechanisms. When you write foo.a you're actually accessing the a attribute of the object, which is defined through the class definition:
class Foo:
    def __init__(self):
        self.a = 42  # declaring and defining the a attribute
so then you can access it using:
>>> foo = Foo()
>>> print(foo.a)
42
But to have foo['a'] working, you have to use the indexing mechanism, which is usually used for dicts or lists:
>>> foo = {'a': 42}
>>> foo['a']
42
That mechanism is being implemented by the __getitem__ method of your class, so you can overload it if you want:
class Foo:
    def __getitem__(self, val):
        if val == 'a':
            return 42
        raise KeyError('Unknown key')  # when the key is unknown, raise a KeyError
>>> foo = Foo()
>>> foo['a']
42
>>> foo['b']
KeyError: 'Unknown key'
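In the same spirit, item assignment (foo['a'] = 1) is routed through __setitem__, which you can also define yourself. A tiny sketch with a made-up class of my own:
class Bag:
    def __init__(self):
        self._items = {}              # store the items in a plain dict internally
    def __setitem__(self, key, value):
        self._items[key] = value
    def __getitem__(self, key):
        return self._items[key]

>>> bag = Bag()
>>> bag['a'] = 1
>>> bag['a']
1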
So, the dict class is a class that implements __getitem__ (and __setitem__ and many others) in order to provide a proper mapping mechanism called a dictionary. Its keys can be any hashable objects and its values anything. For a list, the indices can only be integers (the positions in the list).
That being said, let's answer your question:
Why does my mydict3['a'] raise an error?
obviously it's because you defined mydict3 as a subclass of str, and strings have their own item behaviour: str's __getitem__ gives you the character at an integer position (as if the string were a list of characters, like in C), and because strings are immutable, str does not implement __setitem__ at all.
So when you try to assign into mydict3 with the key 'a', Python just tells you that what you're asking makes no sense!
So in the end, when you say:
The only difference I made is between MyDict(dict) and MyDict3(str).
it's actually a very big difference! A dict and an str do not have the same interface, and thus what you want to do cannot work!
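You can see that interface difference for yourself in the interpreter (a quick check of my own, nothing specific to the classes above):
>>> hasattr(dict, '__setitem__'), hasattr(dict, '__getitem__')
(True, True)
>>> hasattr(str, '__setitem__'), hasattr(str, '__getitem__')
(False, True)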
P.S.: Actually, nothing is black or white. A class instance is actually backed by a dict, and you can access all the members of an instance through its __dict__ attribute:
class Foo():
    def __init__(self):
        self.a = 42
>>> foo = Foo()
>>> foo.__dict__['a']
42
but you should never access __dict__ directly; use the helper functions setattr and getattr instead:
>>> setattr(foo, 'b', 42)
>>> getattr(foo, 'b')
42
>>> getattr(foo, 'a')
42
These are advanced Python tricks, and they should be used with care; reach for them only if there's really no other way to do it.
Also, there exists a special class that turns dict items into attributes: the namedtuple:
>>> from collections import namedtuple
>>> d = {'a': 42, 'b': 69}
>>> SpecialDict = namedtuple('SpecialDict', d.keys())
>>> foo = SpecialDict(**d)
>>> foo.a
42
>>> foo.b
69
HTH
# python3
def foo(a):
    class A:
        def say(self):
            print(a)
    return A

A = foo(1)
'__closure__' in dir(A.say)  # True
a = A()
a.say.__closure__  # it returns the closure tuple
'__closure__' in dir(a.say)  # False
'__closure__' in dir(a.say.__class__)  # False
'__closure__' in dir(a.say.__class__.__class__)  # False
In Python 3, A.say is a function, and I know it has a __closure__ attribute.
__closure__ is not in dir(a.say) or in dir() of its class, but a.say.__closure__ still returns the closure tuple. That confuses me.
Thanks.
I don't know the internal implementation of objects of type instancemethod in Python, but I think this is how __getattr__ works for instance method objects.
My guess is that when you write a.say.__closure__, it first looks for __closure__ on a.say itself and then falls back to a.say.im_func.
>>> a = foo(1)()
>>> print type(a.say)
<type 'instancemethod'>
>>> a.say.im_func.__closure__
(<cell at 0x10a00f980: int object at 0x7fef29e098b8>,)
>>> '__closure__' in dir(a.say.im_func)
True
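In Python 3 the same forwarding happens through the method's __func__ attribute. A quick check of my own (assuming CPython's bound-method behaviour, and reusing foo from the question):
>>> a = foo(1)()
>>> type(a.say)
<class 'method'>
>>> a.say.__closure__ is a.say.__func__.__closure__   # attribute access is forwarded to the function
True
>>> '__closure__' in dir(a.say.__func__)
True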
Code:
import types

class C(object):
    pass

c = C()
print(isinstance(c, types.InstanceType))
Output:
False
What is the correct way to check whether an object is an instance of a user-defined class, for new-style classes?
UPD:
I want to put additional emphasis on checking whether the type of the object is user-defined. According to the docs:
types.InstanceType
The type of instances of user-defined classes.
UPD2:
Alright - not "correct" ways are OK too.
UPD3:
I also noticed that there is no type for set in the types module.
You can combine the x.__class__ check with the presence (or absence) of either '__dict__' in dir(x) or hasattr(x, '__slots__'), as a hacky way to distinguish both new-style vs. old-style classes and user-defined vs. builtin objects.
Actually, this exact same suggestion appears in https://stackoverflow.com/a/2654806/1832154
def is_instance_userdefined_and_newclass(inst):
    cls = inst.__class__
    if hasattr(cls, '__class__'):
        return ('__dict__' in dir(cls) or hasattr(cls, '__slots__'))
    return False
>>> class A: pass
...
>>> class B(object): pass
...
>>> a = A()
>>> b = B()
>>> is_instance_userdefined_and_newclass(1)
False
>>> is_instance_userdefined_and_newclass(a)
False
>>> is_instance_userdefined_and_newclass(b)
True
I'm not sure about the "correct" way, but one easy way to test it is that instances of old style classes have the type 'instance' instead of their actual class.
So type(x) is x.__class__ or type(x) is not types.InstanceType should both work.
>>> import types
>>> class Old:
... pass
...
>>> class New(object):
... pass
...
>>> x = Old()
>>> y = New()
>>> type(x) is x.__class__
False
>>> type(y) is y.__class__
True
>>> type(x) is types.InstanceType
True
>>> type(y) is types.InstanceType
False
This prints "YES!" if it is:
if issubclass(checkthis, (object,)) and 'a' not in vars(__builtins__): print "YES!"
The second argument is a tuple of the classes to be checked.
This is easy to understand and I'm sure it works.
[edit: changed (object) to (object,), thanks Duncan!]
Probably I can go with an elimination method: not checking explicitly whether an object is an instance of a user-defined class (isinstance(obj, RightTypeAliasForThisCase)), but checking whether the object is not one of the 'basic' types.
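A minimal sketch of that elimination idea (Python 2; the helper name and the list of 'basic' types below are my own assumptions, adjust them to taste):
import types

# hand-picked builtin "basic" types to eliminate
BASIC_TYPES = (int, long, float, complex, str, unicode, bytearray,
               list, tuple, dict, set, frozenset, type(None))

def looks_like_userdefined_instance(obj):
    # not a basic builtin value and not an old-style class instance
    return not isinstance(obj, BASIC_TYPES) and type(obj) is not types.InstanceType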