Object that has neither attributes nor methods in python

'Everything in Python is an object.'
So, should all objects have to have attributes and methods?
I read the statement below on a tutorial site. Could you give an example of a pre-defined object in Python that has neither attributes nor methods?
Some objects have neither attributes nor methods

Everything in Python is indeed an object; this is true. Even classes themselves are objects: they are instances of the builtin class type, which is, not surprisingly, an object too. But in most circumstances objects will inherit attributes, including methods and data, from somewhere.
So, should all objects have to have attributes and methods?
Not necessarily. Not all objects have attributes of their own. For instance, an object can inherit an attribute from its class and its class's superclasses, but that attribute or method doesn't necessarily live within the instance's namespace dictionary. In fact, an instance's namespace can be empty, like the following:
class Foo:
    pass

a = Foo()
print(a.__dict__)
a here doesn't have any attributes aside from those inherited from its class, so if you check its namespace through the builtin attribute __dict__ you'll find an empty dictionary. But you might wonder: isn't a.__dict__ an attribute of a? Make a distinction between class-level attributes (attributes inherited from the class or its superclasses) and instance attributes (attributes that belong to the instance and usually live in its namespace __dict__).
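For illustration, here is a small sketch of that distinction (the class and attribute names are just examples):
class Bar:
    class_attr = 42                      # stored in Bar.__dict__, not on instances

b = Bar()
print(b.__dict__)                        # {} -- the instance namespace starts empty
print(b.class_attr)                      # 42 -- lookup falls back to the class
b.instance_attr = "hello"
print(b.__dict__)                        # {'instance_attr': 'hello'}
print('class_attr' in Bar.__dict__)      # True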
Could you give an example of a pre-defined object in Python that has neither attributes nor methods?
If by predefined object you mean a builtin object, I can't imagine such a scenario. Again, even if an object has no attributes of its own, there will in most cases be attributes inherited from its class or, if there are any superclasses, from the class's superclasses. Probably, and I'm guessing here, the tutorial is asking you to create a class that assigns no attributes to its objects, just like the code I included above.
And this already answers your question better: Is everything an object in python like ruby?

There's a hackish way to emulate a Python object with no attributes.
class NoAttr(object):
    def __getattribute__(self, attr):
        raise AttributeError("no attribute: %s" % attr)

    def __setattr__(self, attr, value):
        raise AttributeError("can't set attribute: %s" % attr)

    def __delattr__(self, attr):
        raise AttributeError("no attribute: %s" % attr)

a = NoAttr()
This instance a, for all intents and purposes, in pure Python, behaves like an object with no attributes (you can try hasattr on it).
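For example, continuing with the instance a created above:
print(hasattr(a, 'foo'))        # False
print(hasattr(a, '__class__'))  # False -- even ordinary lookups are blocked
try:
    a.x = 1
except AttributeError as exc:
    print(exc)                  # can't set attribute: x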
There may be a low-level way to do this in a C extension by implementing a type in C that pathologically stops Python's object implementation from working. Anyway the margin here is too small for writing one.
A pre-defined object with no attributes would defeat the purpose of pre-defining it.

Related

What is the purpose of metaclass methods being defined on classes but not instances?

A method defined on a metaclass is accessible by classes that use the metaclass. However, the method will not be accessible on the instances of these classes.
My first guess was that metaclass methods would not be accessible on either classes or instances.
My second guess was that metaclass methods would be accessible on both classes and instances.
I find it surprising that metaclass methods are instead accessible on classes, but not on instances.
What is the purpose of this behavior? Is there any case where I can use this to an advantage? If there is no intended purpose, how does the implementation work such that this is the resulting behavior?
class Meta(type):
    def __new__(mcs, name, bases, dct):
        mcs.handle_foo(dct)
        return type.__new__(mcs, name, bases, dct)

    @classmethod
    def handle_foo(mcs, dct):
        """
        The sole purpose of this method is to encapsulate some logic
        instead of writing code directly in __new__,
        and also so that subclasses of the metaclass can override this.
        """
        dct['foo'] = 1

class Meta2(Meta):
    @classmethod
    def handle_foo(mcs, dct):
        """Example of a subclass of the metaclass overriding it"""
        dct['foo'] = 10000

class A(metaclass=Meta):
    pass

class B(metaclass=Meta2):
    pass

assert A.foo == 1
assert B.foo == 10000

assert hasattr(A, 'handle_foo')
assert hasattr(B, 'handle_foo')
# What is the purpose or reason of this method being accessible on A and B?
# If there is no purpose, what about the implementation explains why it is accessible here?

instance = A()
assert not hasattr(instance, 'handle_foo')
# Why is this method not accessible on the instance, when it is on the class?
# What is the purpose or reason for this method not being accessible on the instance?
What is the purpose of this behavior? What use case is this behavior intended to support? I am interested in a direct quote from the documentation, if one exists.
If there is no purpose, and this is simply a byproduct of the implementation, why does the implementation result in this behavior? I.e., how are metaclasses implemented such that the methods defined on the metaclass are also defined accessible on classes that use the metaclass, but not the instantiated objects of these classes?
The only practical implication of this that I have found is the following: PyCharm will include these metaclass methods in the code completion box when you start typing A. (i.e., the class name followed by a dot). I don't want users of my framework to see this. One way to mitigate this is by renaming these methods as private methods (e.g. _handle_foo), but I would still rather these methods not show up in code completion at all. Using a dunder naming convention (__) won't work, as subclasses of the metaclass would not be able to override the methods.
(I've edited this post extensively due to the thoughtful feedback from Miyagi and Serge, in order to make it more clear as to why I am defining methods on the metaclass in the first place: simply in order to encapsulate some behavior instead of putting all the code in __new__, and to allow those methods to be overridden by subclasses of the metaclass)
Let us first look at this in a non-meta situation: We define a function inside a class and access it via the instance.
>>> class Foo:
... def bar(self): ...
...
>>> Foo.bar
<function __main__.Foo.bar(self)>
>>> foo = Foo()
>>> foo.bar
<bound method Foo.bar of <__main__.Foo object at 0x10dc75790>>
Of note is that the two "attributes" are not the same kind: The class' attribute is the very thing we put into it, but the instance's "attribute" is a dynamically created thing.
Likewise, methods defined on a metaclass are not inherited by the class, they are (dynamically) bound to its classes.
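The lookups below assume a setup roughly like this (meta_method is just an illustrative name, and Foo is redefined here with the metaclass):
class Meta(type):
    def meta_method(cls):
        # a method that lives on the metaclass, not on the class itself
        return cls.__name__

class Foo(metaclass=Meta):
    pass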
>>> Meta.meta_method # direct access to "class" attribute
<function __main__.Meta.meta_method(cls)>
>>> Foo.meta_method # instance access to "class" attribute
<bound method Meta.meta_method of <class '__main__.Foo'>>
This is the exact same mechanism – because a class is "just" a metaclass' instance.
It should be obvious at this point that the attributes defined on the metaclass and dynamically bound to the class are not the same thing, and there is no reason for them to behave the same. Whether lookup of attributes on an instance picks up metaclass-methods from their dynamic form on the class, directly from the metaclass or not at all is a judgement call.
Python's data model defines that default lookup only takes into account the instance and the instance's type. The instance's type's type is explicitly excluded.
Invoking Descriptors
[…]
The default behavior for attribute access is to get, set, or delete the attribute from an object’s dictionary. For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
There is no rationale given for this approach. However, it is sufficient to replicate common instantiation+inheritance behaviour of other languages. At the same time, it avoids arbitrarily deep lookups and the issue that type is a recursive metaclass.
Notably, since a metaclass is in full control of how a class behaves, it can directly define methods in the class or redefine attribute access to circumvent the default behaviour.
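As a rough sketch of that last point, a metaclass can copy a helper into the class namespace so that instances see it too (ExposingMeta and helper are illustrative names, not anything the answer above prescribes):
class ExposingMeta(type):
    def __new__(mcs, name, bases, dct):
        # inject a plain function into the class body, so normal
        # instance lookup (instance -> class) will find it
        dct['helper'] = lambda self: "visible from instances"
        return super().__new__(mcs, name, bases, dct)

class C(metaclass=ExposingMeta):
    pass

assert hasattr(C, 'helper')
assert hasattr(C(), 'helper')   # works, because helper lives in C.__dict__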

Where is the __bases__ attribute defined?

I define a Python class in the Python interpreter:
class A:
    pass
I get the base class of A using A.__bases__, and it shows
(object,)
but when I enter dir(A), the output doesn't contain a __bases__ attribute. Then I try dir(object), and __bases__ is not found either. Where does __bases__ come from?
The __bases__ attribute in a class is implemented by a descriptor in the metaclass, type. You have to be a little careful though, since type, as one of the building blocks of the Python object model, is an instance of itself, and so type.__bases__ doesn't do what you would want for introspection.
Try this:
descriptor = type.__dict__['__bases__']
print(descriptor, type(descriptor))
You can reproduce the same kind of thing with your own descriptors:
class MyMeta(type):
    @property  # a property is a descriptor
    def foo(cls):
        return "foo"

class MyClass(metaclass=MyMeta):
    pass
Now if you access MyClass.foo you'll get the string foo. But you won't see foo in the variables defined in MyClass (if you check with vars or dir). Nor can you access it through an instance of MyClass (my_obj = MyClass(); my_obj.foo raises an AttributeError).
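A quick check of those three claims, continuing the MyMeta/MyClass example above:
print(MyClass.foo)               # 'foo' -- the property on the metaclass fires
print('foo' in vars(MyClass))    # False -- not in the class namespace
my_obj = MyClass()
try:
    my_obj.foo
except AttributeError as exc:
    print(exc)                   # instance lookup never consults the metaclass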
It is a special attribute, akin to __name__ or __dict__, while the result of the dir function depends on the implementation of the __dir__ method.
You might want to look it up in the docs: https://docs.python.org/3/reference/datamodel.html

Difference between __getattribute__ and obj.__dict__['x'] in python?

I understand that in Python, whenever you access a class or instance attribute, the __getattribute__ method is called to get the result. However, I can also use obj.__dict__['x'] directly and get what I want.
I am a little confused about what is the difference?
Also when I use getattr(obj, name), is it calling __getattribute__ or obj.__dict__[name] internally?
Thanks in advance.
The __getattribute__() method is for lower-level attribute processing.
The default implementation tries to find the name in the internal __dict__ (or __slots__). If the attribute is not found, it calls __getattr__().
UPDATE (as in the comment):
They are different ways for finding attributes in the Python data model. They are internal methods designed to fallback properly in any possible situation. A clue: "The machinery is in object.__getattribute__() which transforms b.x into type(b).__dict__['x'].__get__(b, type(b))." from docs.python.org/3/howto/descriptor.html
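In other words, getattr(obj, name) goes through type(obj).__getattribute__, which does considerably more than a plain dictionary lookup. A small sketch of the difference (Box and prop are illustrative names):
class Box:
    @property
    def prop(self):
        return "computed"

b = Box()
print(getattr(b, 'prop'))    # 'computed' -- the descriptor protocol is honoured
print('prop' in b.__dict__)  # False -- nothing is stored on the instance
# b.__dict__['prop'] would raise KeyError: the raw dict bypasses descriptors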
Attributes in __dict__ are only a subset of all attributes that an object has.
Consider this class:
class C:
    ac = "+AC+"

    def __init__(self):
        self.ab = "+AB+"

    def show(self):
        pass
An instance ic = C() of this class will have attributes 'ab', 'ac' and 'show' (and a few others). __getattribute__ will find them all, but only 'ab' is stored in ic.__dict__. The other two can be found in C.__dict__.
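You can see the split directly, continuing the class above:
ic = C()
print(sorted(ic.__dict__))                        # ['ab'] -- only the instance attribute
print('ac' in C.__dict__, 'show' in C.__dict__)   # True True
print(ic.ac, ic.ab)                               # +AC+ +AB+ -- lookup finds both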
Not every Python object has a dictionary where its attributes are stored; there are slots, properties and attributes that are computed when needed. You can also override __getattribute__ and __getattr__. Attribute access is more complicated than a simple dictionary lookup. So the normal way to access an attribute is
obj.x
or, when you have a variable holding the attribute name:
getattr(obj, name)
Normally you shouldn't use the internal __xxx__ attributes.
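For instance, attributes that come from __slots__ (or from a property) are found by getattr but never appear in an instance __dict__; a small sketch, with Point as an example class:
class Point:
    __slots__ = ('x',)              # no per-instance __dict__ at all

    def __init__(self, x):
        self.x = x

p = Point(1)
print(getattr(p, 'x'))              # 1 -- goes through __getattribute__
print(hasattr(p, '__dict__'))       # False -- there is no dict to poke at by hand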

Python's "class is a dict" theory and consequences of adding class variables

Just out of curiosity I'm playing with the __dict__ attribute on Python classes.
I read somewhere that a "class" in Python is kind of a dict, and that calling __dict__ on a class instance translates the instance into a dict... and I thought "cool! I can use this!"
But this lead me to doubts about the correctness and security of these actions.
For example, if I do:
class fooclass(object):
    def __init__(self, arg):
        self.arg1 = arg

p = fooclass('value1')
print(p.__dict__)
p.__dict__['arg2'] = 'value2'
print(p.__dict__)
print(p.arg2)
I have this:
>>>{'arg1': 'value1'}
>>>{'arg1': 'value1', 'arg2': 'value2'}
>>>value2
and that's fine, but:
Does the fooclass still have 1 attribute? How can I be sure?
Is it secure to add attributes that way?
Have you ever had cases where this came in handy?
I see that I can't do fooclass.__dict__['arg2'] = 'value2'... so why is there this difference between a class and an instance?
You are altering the attributes of the instance. Adding and removing from the __dict__ is exactly what happens for most custom class instances.
The exception is when you have a class that uses __slots__; instances of such a class do not have a __dict__ attribute as attributes are stored in a different way.
Python is a language for consenting adults; there is no protection against adding attributes to instances with a __dict__ in any case, so adding them to the dictionary or by using setattr() makes no difference.
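A quick check, reusing p from the question above:
setattr(p, 'arg3', 'value3')   # same effect as p.__dict__['arg3'] = 'value3'
print(p.__dict__)              # {'arg1': 'value1', 'arg2': 'value2', 'arg3': 'value3'}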
Accessing the __dict__ is helpful when you want to use existing dict methods to access or alter attributes; you can use dict.update() for example:
class Namespace(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
It is also helpful when trying to read an attribute on the instance for which there is a corresponding data descriptor on the class; to bypass the latter you can access the __dict__ on the instance to get to the attribute. You can also use this to test for attributes on the instance and ignore anything the class or the base classes might define.
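A small sketch of that descriptor-bypass case (Shadowed is an illustrative name):
class Shadowed:
    @property
    def value(self):                      # a data descriptor on the class
        return "from the property"

s = Shadowed()
s.__dict__['value'] = "raw instance data"
print(s.value)                            # 'from the property' -- the descriptor wins
print(s.__dict__['value'])                # 'raw instance data' -- read past it via __dict__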
As for fooclass.__dict__: you are confusing the class with the instances of a class. They are separate objects. A class is a different type of object, and ClassObj.__dict__ is just a proxy object; you can still use setattr() on the class to add arbitrary attributes:
setattr(fooclass, 'arg2', 'value2')
or set attributes directly:
fooclass.arg2 = 'value2'
A class needs to have a __dict__ proxy, because the actual __dict__ attribute is a descriptor object to provide that same attribute on instances. See What is the __dict__.__dict__ attribute of a Python class?
Details about how Python objects work are documented in the Datamodel reference documentation.

Python: Can a class forbid clients setting new attributes?

I just spent too long on a bug like the following:
>>> class Odp():
...     def __init__(self):
...         self.foo = "bar"
>>> o = Odp()
>>> o.raw_foo = 3 # oops - meant o.foo
I have a class with an attribute. I was trying to set it, and wondering why it had no effect. Then I went back to the original class definition and saw that the attribute was named something slightly different. Thus, I was creating and setting a new attribute instead of the one I meant to.
First off, isn't this exactly the type of error that statically-typed languages are supposed to prevent? In this case, what is the advantage of dynamic typing?
Secondly, is there a way I could have forbidden this when defining Odp, and thus saved myself the trouble?
You can implement a __setattr__ method for the purpose -- that's much more robust than __slots__, which is often misused for this purpose (for example, __slots__ is automatically "lost" when the class is inherited from, while __setattr__ survives unless explicitly overridden).
def __setattr__(self, name, value):
    if hasattr(self, name):
        object.__setattr__(self, name, value)
    else:
        raise TypeError('Cannot set name %r on object of type %s' % (
            name, self.__class__.__name__))
You'll have to make sure the hasattr succeeds for the names you do want to be able to set, for example by setting the attributes at a class level or by using object.__setattr__ in your __init__ method rather than direct attribute assignment. (To forbid setting attributes on a class rather than its instances you'll have to define a custom metaclass with a similar special method).
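A minimal end-to-end sketch of that approach, applied to the Odp class from the question (attribute names are just the ones used there):
class Odp:
    def __init__(self):
        object.__setattr__(self, 'foo', 'bar')   # bypass the guard for known names

    def __setattr__(self, name, value):
        if hasattr(self, name):
            object.__setattr__(self, name, value)
        else:
            raise TypeError('Cannot set name %r on object of type %s' % (
                name, self.__class__.__name__))

o = Odp()
o.foo = 'baz'         # fine: 'foo' already exists
try:
    o.raw_foo = 3     # the original typo now fails loudly
except TypeError as exc:
    print(exc)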
