Python: dynamically set non-instance class attributes

I am trying to add class attributes dynamically, but not at the instance level. E.g. what I can do manually as:
class Foo(object):
    a = 1
    b = 2
    c = 3
I'd like to be able to do with:
class Foo(object):
    dct = {'a': 1, 'b': 2, 'c': 3}
    for key, val in dct.items():
        <update the Foo namespace here>
I'd like to be able to do this without calling into the class from outside the class (so it's portable), and without additional helper classes or decorators. Is this possible?

Judging from your example code, you want to do this at the same time you create the class. In this case, assuming you're using CPython, you can use locals().
class Foo(object):
    locals().update(a=1, b=2, c=3)
This works because while a class is being defined, locals() refers to the class namespace. It's implementation-specific behavior and may not work in later versions of Python or alternative implementations.
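A quick check that the names really did land in the class namespace (still CPython-specific, as noted):
print(Foo.a)    # 1
print(Foo().b)  # 2 - visible on instances through the normal class lookup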
A less dirty-hacky version that uses a class factory is shown below. The basic idea is that your dictionary is converted to a class by way of the type() constructor, and this is then used as the base class for your new class. For convenience of defining attributes with a minimum of syntax, I have used the ** convention to accept the attributes.
def dicty(*bases, **attrs):
    if not bases:
        bases = (object,)
    return type("<from dict>", bases, attrs)

class Foo(dicty(a=1, b=2, c=3)):
    pass

# if you already have the dict, use unpacking
dct = dict(a=1, b=2, c=3)

class Foo(dicty(**dct)):
    pass
This is really just syntactic sugar for calling type() yourself. This works fine, for instance:
class Foo(type("<none>", (object,), dict(a=1, b=2, c=3))):
    pass

Do you mean something like this:
def update(obj, dct):
    for key, val in dct.items():
        setattr(obj, key, val)
Then just go
update(Foo, {'a': 1, 'b': 2, 'c': 3})
This works, because a class is just an object too ;)
If you want to move everything into the class, then try this:
class Foo(object):
    __metaclass__ = lambda t, p, a: type(t, p, a['dct'])
    dct = {'a': 1, 'b': 2, 'c': 3}
This will create a new class whose members come from dct, but any other attributes defined in the class body will not be present - so you will want to alter the last argument to type() so that it includes the rest of the namespace as well (a sketch follows below). I found out how to do this here: What is a metaclass in Python?
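For example, a rough sketch of that alteration. It keeps the Python 2 __metaclass__ hook used above (Python 3 ignores it), and the helper name _merge_dct is made up for illustration:
def _merge_dct(name, bases, attrs):
    # keep everything already in the class namespace, then fold in dct
    merged = dict(attrs)
    merged.update(merged.pop('dct', {}))
    return type(name, bases, merged)

class Foo(object):
    __metaclass__ = _merge_dct
    dct = {'a': 1, 'b': 2, 'c': 3}
    other = 'still here'  # preserved, unlike with the bare lambda above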

The accepted answer is a nice approach. However, one downside is you end up with an additional parent object in the MRO inheritance chain that isn't really necessary and might even be confusing:
>>> Foo.__mro__
(<class '__main__.Foo'>, <class '__main__.<from dict>'>, <class 'object'>)
Another approach would be to use a decorator. Like so:
def dicty(**attrs):
    def decorator(cls):
        # a class's __dict__ is a read-only mapping proxy, so use setattr
        # rather than vars(cls).update(...)
        for name, value in attrs.items():
            setattr(cls, name, value)
        return cls
    return decorator
@dicty(**some_class_attr_namespace)
class Foo():
    pass
In this way, you avoid an additional object in the inheritance chain. The @decorator syntax is just a pretty way of saying:
Foo = dicty(a=1, b=2, c=3)(Foo)
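A quick check with the decorated class above (assuming Python 3 here):
print(Foo.a)        # 1
print(Foo.__mro__)  # (<class '__main__.Foo'>, <class 'object'>) - no extra base class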

Related

Is it possible to pass variables or object's attributes to a function or method?

For example consider a following function and class:
def foo(Attributes, values):
    for item, value in zip(Attributes, values):
        item = value

class bar:
    foo = foo
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def update(self, a, b):
        foo((self.a, self.b), (a, b))
The above function is just meant to update the attributes of any object of class bar. More specifically, foo is intended as a generalized function for updating the attributes of objects of any class, not just bar, even though different classes may give their objects different numbers of attributes.
So, is it possible to pass the attributes themselves, rather than their values, to such a generalized function? Or is there some other way to write a generalized function that updates the attributes of objects of any class?
Objects in Python are basically dictionaries. You could do something like this
def foo(obj, attributes, values):
    for item, value in zip(attributes, values):
        obj.__dict__[item] = value

class Bar:
    def __init__(self, a, b):
        self.a = a
        self.b = b

bar = Bar(1, 2)
foo(bar, ["a", "b"], [3, 4])
assert bar.a == 3
assert bar.b == 4
That being said: Just because you could, does not mean you should. This solution is hacky, confusing and really not necessary. In nearly all cases it would be better to just set a bunch of member variables:
bar.a = 3
bar.b = 4
If the field names themselves are dynamic, it would be best to just use a dict directly.
Edit: @MisterMiyagi is absolutely correct. You should use setattr(obj, item, value) instead of my even worse hack.
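For reference, a minimal sketch of that setattr-based version:
def foo(obj, attributes, values):
    for item, value in zip(attributes, values):
        # setattr goes through the normal attribute machinery, so it also
        # works with properties, __slots__ and custom __setattr__ hooks
        setattr(obj, item, value)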

What really makes an object callable in python [duplicate]

I would like to do the following:
class A(object): pass
a = A()
a.__int__ = lambda self: 3
i = int(a)
Unfortunately, this throws:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: int() argument must be a string or a number, not 'A'
This only seems to work if I assign the "special" method to the class A instead of an instance of it. Is there any recourse?
One way I thought of was:
def __int__(self):
    # No infinite loop
    if type(self).__int__.im_func != self.__int__.im_func:
        return self.__int__()
    raise NotImplementedError()
But that looks rather ugly.
Thanks.
Python always looks up special methods on the class, not the instance (except in the old, aka "legacy", kind of classes -- they're deprecated and have gone away in Python 3, because of the quirky semantics that mostly comes from looking up special methods on the instance, so you really don't want to use them, believe me!-).
To make a special class whose instances can have special methods independent from each other, you need to give each instance its own class -- then you can assign special methods on the instance's (individual) class without affecting other instances, and live happily ever after. If you want to make it look like you're assigning to an attribute of the instance, while actually assigning to an attribute of the individualized per-instance class, you can get that with a special __setattr__ implementation, of course.
Here's the simple case, with explicit "assign to class" syntax:
>>> class Individualist(object):
...     def __init__(self):
...         self.__class__ = type('GottaBeMe', (self.__class__, object), {})
...
>>> a = Individualist()
>>> b = Individualist()
>>> a.__class__.__int__ = lambda self: 23
>>> b.__class__.__int__ = lambda self: 42
>>> int(a)
23
>>> int(b)
42
>>>
and here's the fancy version, where you "make it look like" you're assigning the special method as an instance attribute (while behind the scene it still goes to the class of course):
>>> class Sophisticated(Individualist):
...     def __setattr__(self, n, v):
...         if n[:2]=='__' and n[-2:]=='__' and n!='__class__':
...             setattr(self.__class__, n, v)
...         else:
...             object.__setattr__(self, n, v)
...
>>> c = Sophisticated()
>>> d = Sophisticated()
>>> c.__int__ = lambda self: 54
>>> d.__int__ = lambda self: 88
>>> int(c)
54
>>> int(d)
88
The only recourse that works for new-style classes is to have a method on the class that calls the attribute on the instance (if it exists):
class A(object):
    def __int__(self):
        if '__int__' in self.__dict__:
            return self.__int__()
        raise ValueError

a = A()
a.__int__ = lambda: 3
int(a)
Note that a.__int__ will not be a method (only functions that are attributes of the class will become methods) so self is not passed implicitly.
I have nothing to add about the specifics of overriding __int__. But I noticed one thing about your sample that bears discussing.
When you manually assign new methods to an object, "self" is not automatically passed in. I've modified your sample code to make my point clearer:
class A(object): pass
a = A()
a.foo = lambda self: 3
a.foo()
If you run this code, it throws an exception because you passed in 0 arguments to "foo" and 1 is required. If you remove the "self" it works fine.
Python only automatically prepends "self" to the arguments if it had to look up the method in the class of the object and the function it found is a "normal" function. (Examples of "abnormal" functions: class methods, callable objects, bound method objects.) If you stick callables in to the object itself they won't automatically get "self".
If you want self there, use a closure.
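A minimal sketch of that closure approach; the factory name make_foo is made up for illustration:
class A(object): pass

a = A()

def make_foo(instance):
    def foo():
        # 'instance' is captured by the closure and plays the role of self
        return 3
    return foo

a.foo = make_foo(a)
print(a.foo())  # 3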

What is the difference between instance dict and class dict

I was reading about Python descriptors and there was this line:
Python first looks for the member in the instance dictionary. If it's
not found, it looks for it in the class dictionary.
I am really confused about what the instance dict and the class dict are.
Can anyone please explain, with code, what they are?
I was thinking of them as the same thing.
An instance dict holds references to all the objects and values assigned to the instance, while the class-level dict holds everything defined in the class namespace.
Take the following example:
>>> class A(object):
...     def foo(self, bar):
...         self.zoo = bar
...
>>> i = A()
>>> i.__dict__ # instance dict is empty
{}
>>> i.foo('hello') # assign a value to an instance
>>> i.__dict__
{'zoo': 'hello'} # this is the instance level dict
>>> i.z = {'another':'dict'}
>>> i.__dict__
{'z': {'another': 'dict'}, 'zoo': 'hello'} # all at instance level
>>> A.__dict__.keys() # at the CLASS level, only holds items in the class's namespace
['__dict__', '__module__', 'foo', '__weakref__', '__doc__']
I think you can understand it with this example.
class Demo(object):
    class_dict = {}  # Class dict, common for all instances
    def __init__(self, d):
        self.instance_dict = d  # Instance dict, different for each instance
And it's always possible to add an instance attribute on the fly, like this:
demo = Demo({1: "demo"})
demo.new_dict = {} # A new instance dictionary defined just for this instance
demo2 = Demo({2: "demo2"})  # This instance only has the one instance dictionary defined in the __init__ method
So, in the above example, the demo instance now has 2 instance dictionaries - one added outside the class, and one added to every instance in the __init__ method. Whereas the demo2 instance has just 1 instance dictionary, the one added in the __init__ method.
Apart from that, both the instances have a common dictionary - the class dictionary.
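Continuing that example, the sharing is easy to observe:
demo.class_dict['shared'] = True
print(demo2.class_dict)     # {'shared': True} - same dict object, reached through the class
print(demo.instance_dict)   # {1: 'demo'}
print(demo2.instance_dict)  # {2: 'demo2'} - separate, per-instance dicts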
Those dicts are the internal way of representing the object or class-wide namespaces.
Suppose we have a class:
class C(object):
    def f(self):
        print "Hello!"

c = C()
At this point, f is a method defined in the class dict (f in C.__dict__, and C.f is an unbound method in terms of Python 2.7).
c.f() will go through the following steps:
look for f in c.__dict__ and fail
look for f in C.__dict__ and succeed
call C.f(c)
Now, let's do a trick:
def f_french():
    print "Bonjour!"

c.f = f_french
We've just modified the object's own dict. That means c.f() will now print Bonjour!. This does not affect the original class behaviour, so other instances of C will still speak English.
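To confirm that the class itself is untouched (continuing the snippet above):
d = C()
d.f()  # Hello!   - other instances still find f on the class
c.f()  # Bonjour! - only this instance's own dict shadows it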
The class dict is shared among all instances (objects) of the class, while each instance (object) has its own separate instance dict.
You can define attributes separately on a per-instance basis rather than for the whole class.
For example:
class A(object):
    an_attr = 0

a1 = A()
a2 = A()
a1.another_attr = 1
Now a2 will not have another_attr. That is part of the instance dict rather than the class dict.
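Inspecting the two dicts makes the difference visible:
print(a1.__dict__)              # {'another_attr': 1}
print(a2.__dict__)              # {}
print('an_attr' in A.__dict__)  # True - shared through the class dict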
Rohit Jain has the simplest python code to explain this quickly. However, understanding the same ideas in Java can be useful, and there is much more information about class and instance variables here

Recursively walking a Python inheritance tree at run-time

I'm writing some serialization/deserialization code in Python that will read/write an inheritance hierarchy from some JSON. The exact composition will not be known until the request is sent in.
So, I deem the elegant solution to recursively introspect the Python class hierarchy to be emitted and then, on the way back up through the tree, install the correct values in a Python basic type.
E.g.,
    A
   / \
  B   C
If I call my "introspect" routine on B, it should return a dict that contains a mapping from all of A's variables to their values, as well as B's variables and their values.
As it now stands, I can look through B.__slots__ or B.__dict__, but I can only pull out B's variable names from there.
How do I get the __slots__/__dict__ of A, given only B? (or C).
I know that Python doesn't directly support casting the way C++ and its descendants do.
You might try using the type.mro() method to find the method resolution order.
class A(object):
    pass

class B(A):
    pass

class C(A):
    pass
a = A()
b = B()
c = C()
>>> type.mro(type(b))
[<class '__main__.B'>, <class '__main__.A'>, <type 'object'>]
>>> type.mro(type(c))
[<class '__main__.C'>, <class '__main__.A'>, <type 'object'>]
or
>>> type(b).mro()
Edit: I was thinking you wanted to do something like this...
>>> A = type("A", (object,), {'a':'A var'}) # create class A
>>> B = type("B", (A,), {'b':'B var'}) # create class B
>>> myvar = B()
def getvars(obj):
    ''' return dict where key/value is attribute-name/class-name '''
    retval = dict()
    for i in type(obj).mro():
        for k in i.__dict__:
            if not k.startswith('_'):
                retval[k] = i.__name__
    return retval
>>> getvars(myvar)
{'a': 'A', 'b': 'B'}
>>> for i in getvars(myvar):
...     print getattr(myvar, i) # or use setattr to modify the attribute value
A var
B var
Perhaps you could clarify what you are looking for a bit further?
At the moment your description doesn't describe Python at all. Let's assume that in your example A, B and C are the names of the classes:
class A(object):
    def __init__(self):
        self.x = 1

class B(A):
    def __init__(self):
        A.__init__(self)
        self.y = 1
Then a runtime instance could be created as:
b = B()
If you look at the dictionary of the runtime object then it has no distinction between its own variables and variables belonging to its superclass. So, for example:
dir(b)
[ ... snip lots of double-underscores ... , 'x', 'y']
So the direct answer to your question is that it works like that already, but I suspect that is not very helpful to you. What does not show up are methods, as they are entries in the namespace of the class, while instance variables live in the namespace of the object. If you want to find methods in superclasses, then use the mro() call as described in the earlier reply and then look through the namespaces of the classes in that list.
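A sketch of that combination, for instance (the helper name collect_members is made up here):
def collect_members(obj):
    # Instance attributes first, then anything defined along the MRO.
    # vars(obj) assumes the object has a __dict__; for __slots__-only
    # classes, walk the __slots__ entries of each class instead.
    members = {name: ('instance', value) for name, value in vars(obj).items()}
    for klass in type(obj).mro():
        for name, value in vars(klass).items():
            if not name.startswith('__'):  # skip dunders; relax this filter if needed
                members.setdefault(name, (klass.__name__, value))
    return members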
While I was looking around for simpler ways to do JSON serialisation I found some interesting things in the pickle module. One suggestion is that you might want to pickle/unpickle objects rather than write your own code to traverse the hierarchy. The pickle output is an ASCII stream and it may be easier for you to convert that back and forth to JSON. There are some starting points in PEP 307.
The other suggestion is to take a look at the __reduce__ method, try it on the objects that you want to serialise as it may be what you are looking for.
If you only need a tree (not diamond-shaped inheritance), there is a simple way to do it. Represent the tree by nested lists: a branch is [object, [children]] and a leaf is [object, [[]]].
Then, by defining the recursive function:
def classTree(cls): # return all subclasses in form of a tree (nested list)
    return [cls, [[b for c in cls.__subclasses__() for b in classTree(c)]]]
You can get the inheritance tree:
class A():
    pass

class B(A):
    pass

class C(B):
    pass

class D(C):
    pass

class E(B):
    pass
>>> classTree(A)
[<class 'A'>, [[<class 'B'>, [[<class 'C'>, [[<class 'D'>, [[]]]], <class 'E'>, [[]]]]]]]
Which is easy to serialize since it's only a list. If you want only the names, replace cls by cls.__name__.
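For instance, a name-only variant (as suggested above) can be fed straight to the standard json module, assuming the classes from the example:
import json

def classTreeNames(cls):
    return [cls.__name__,
            [[b for c in cls.__subclasses__() for b in classTreeNames(c)]]]

print(json.dumps(classTreeNames(A)))
# ["A", [["B", [["C", [["D", [[]]]], "E", [[]]]]]]]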
For deserialisation, you have to get your class back from text. Please provide details in your question if you want more help for this.

Python Class Members Initialization

I have just recently battled a bug in Python. It was one of those silly newbie bugs, but it got me thinking about the mechanisms of Python (I'm a long time C++ programmer, new to Python). I will lay out the buggy code and explain what I did to fix it, and then I have a couple of questions...
The scenario: I have a class called A that has a dictionary data member; following is its code (this is a simplification, of course):
class A:
    dict1 = {}
    def add_stuff_to_1(self, k, v):
        self.dict1[k] = v
    def print_stuff(self):
        print(self.dict1)
The class using this code is class B:
class B:
    def do_something_with_a1(self):
        a_instance = A()
        a_instance.print_stuff()
        a_instance.add_stuff_to_1('a', 1)
        a_instance.add_stuff_to_1('b', 2)
        a_instance.print_stuff()
    def do_something_with_a2(self):
        a_instance = A()
        a_instance.print_stuff()
        a_instance.add_stuff_to_1('c', 1)
        a_instance.add_stuff_to_1('d', 2)
        a_instance.print_stuff()
    def do_something_with_a3(self):
        a_instance = A()
        a_instance.print_stuff()
        a_instance.add_stuff_to_1('e', 1)
        a_instance.add_stuff_to_1('f', 2)
        a_instance.print_stuff()
    def __init__(self):
        self.do_something_with_a1()
        print("---")
        self.do_something_with_a2()
        print("---")
        self.do_something_with_a3()
Notice that every call to do_something_with_aX() initializes a new "clean" instance of class A, and prints the dictionary before and after the addition.
The bug (in case you haven't figured it out yet):
>>> b_instance = B()
{}
{'a': 1, 'b': 2}
---
{'a': 1, 'b': 2}
{'a': 1, 'c': 1, 'b': 2, 'd': 2}
---
{'a': 1, 'c': 1, 'b': 2, 'd': 2}
{'a': 1, 'c': 1, 'b': 2, 'e': 1, 'd': 2, 'f': 2}
In the second initialization of class A, the dictionaries are not empty, but start with the contents of the last initialization, and so forth. I expected them to start "fresh".
What solves this "bug" is obviously adding:
self.dict1 = {}
in the __init__ constructor of class A. However, that made me wonder:
What is the meaning of the "dict1 = {}" initialization at the point of dict1's declaration (the first line in class A)? Is it meaningless?
What's the mechanism of instantiation that causes copying the reference from the last initialization?
If I add "self.dict1 = {}" in the constructor (or any other data member), how does it not affect the dictionary member of previously initialized instances?
EDIT: Following the answers I now understand that by declaring a data member and not referring to it in the __init__ or somewhere else as self.dict1, I'm practically defining what's called in C++/Java a static data member. By calling it self.dict1 I'm making it "instance-bound".
What you keep referring to as a bug is the documented, standard behavior of Python classes.
Declaring a dict outside of __init__ as you initially did is declaring a class-level variable. It is only created once, when the class is first defined; whenever you create new objects, they will reuse this same dict. To create instance variables, you declare them with self in __init__; it's as simple as that.
When you access an attribute of an instance, say self.foo, Python will first look for 'foo' in self.__dict__. If it is not found there, Python will look for 'foo' in TheClass.__dict__.
In your case, dict1 is on class A, not on the instance.
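Using the class A from the question, the lookup order is easy to observe:
a1 = A()
a2 = A()
a1.add_stuff_to_1('a', 1)
print(a2.dict1)                # {'a': 1} - both instances reach the same class-level dict
print('dict1' in a1.__dict__)  # False - nothing was ever stored on the instance
print('dict1' in A.__dict__)   # True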
@Matthew: Please review the difference between a class member and an object member in object-oriented programming. This problem happens because the declaration of the original dict makes it a class member, and not an object member (as was the original poster's intent). Consequently, it exists once for (is shared across) all instances of the class (i.e. once for the class itself, as a member of the class object), so the behaviour is perfectly correct.
Python's class declarations are executed as a code block, and any local variable definitions (of which function definitions are a special kind) are stored in the constructed class object. Due to the way attribute lookup works in Python, if an attribute is not found on the instance, the value on the class is used.
There is an interesting article about the class syntax on the History of Python blog.
If this is your code:
class ClassA:
    dict1 = {}

a = ClassA()
Then you probably expected this to happen inside Python:
class ClassA:
    __defaults__['dict1'] = {}

a = instance(ClassA)
# a bit of pseudo-code here:
for name, value in ClassA.__defaults__:
    a.<name> = value
As far as I can tell, the effect is as if that happened, except that it is the reference to the dict that is shared, not a copy of its value - which is the default behaviour everywhere in Python. (In reality nothing is copied onto the instance at all; the lookup simply falls through to the class attribute.) Look at this code:
a = {}
b = a
a['foo'] = 'bar'
print b
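This prints {'foo': 'bar'}: a and b are just two names for the same dict object, and the class attribute in the example above is shared in exactly the same way.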
