I am working with an ORM that accepts classes as input and I need to be able to feed it some dynamically generated classes. Currently, I am doing something like this contrived example:
def make_cls(_param):
    class Cls(object):
        param = _param
    return Cls

A, B = map(make_cls, ['A', 'B'])
print A().param
print B().param
While this works fine, it feels a bit off: for example, both classes print as <class '__main__.Cls'> in the REPL. While the name issue is not a big deal (I think I could work around it by setting __name__), I wonder if there are other things I am not aware of.
So my question is: is there a better way to create classes dynamically or is my example mostly fine already?
What is a class? It is just an instance of type. For example:
>>> A = type('A', (object,), {'s': 'i am a member', 'double_s': lambda self: self.s * 2})
>>> a = A()
>>> a
<__main__.A object at 0x01229F50>
>>> a.s
'i am a member'
>>> a.double_s()
'i am a memberi am a member'
From the doc:
type(name, bases, dict)
Return a new type object. This is essentially a dynamic form of the class statement.
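For example, the make_cls factory from the question could be rewritten with type() so that each class also gets its own __name__. A small sketch, keeping the param attribute from the question:

def make_cls(name, _param):
    # type() builds the class dynamically and sets __name__ from its first argument
    return type(name, (object,), {'param': _param})

A, B = make_cls('A', 'a'), make_cls('B', 'b')
A           # <class '__main__.A'>
A().param   # 'a'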
Related
I would like to create a dictionary which based on a string keyword returns a subclass of Foo which I can later instantiate. Is this possible or is it an incorrect approach to the problem?
Pseudo code:
subclasses_of_foo = {"foo1": Foo1, "foo2": Foo2}
subclass_of_foo = subclasses_of_foo["foo1"]
instance = subclass_of_foo()
Sure, you can do that. Give it a go.
Your approach is entirely correct and works. Classes are just objects, just like everything else in Python. They can be stored as values in a dictionary.
Demo:
>>> class Foo:
... pass
...
>>> class Bar:
... pass
...
>>> classmap = {'foo': Foo, 'bar': Bar}
>>> classmap['foo']
<class __main__.Foo at 0x107eee1f0>
>>> classmap['foo']()
<__main__.Foo instance at 0x107eefcb0>
Note that duck typing is something else entirely; it is the practice of treating any object as the correct type provided it implements the attributes and methods you expected (if it walks like a duck, it is a duck).
What's the easiest way to create a naked object that I can assign attributes to?
The specific use case is: I'm doing various operations on a Django object instance, but sometimes the instance is None (there is no instance). In this case I'd like to create the simplest possible fake object such that I can assign values to its attributes (e.g. myobject.foo = 'bar').
Basically I'm looking for the Python equivalent of this piece of Javascript:
myobject = {}
myobject.foo = 'bar'
I know I can use a mock object/library for this, but I'm hoping for a very simple solution (as simple as the Javascript above). Is there a way to create a naked object instance? Something like:
myobject = object()
myobject.foo = 'bar'
You need to create a simple class first:
class Foo(object):
    pass

myobject = Foo()
myobject.foo = 'bar'
You can make it a one-liner like this:
myobject = type("Foo", (object,), {})()
myobject.foo = 'bar'
The call to type functions identically to the previous class statement.
If you want to be really minimal...
myobject = type("", (), {})()
The key is that the built-in types (such as list and object) don't support user-defined attributes, so you need to create a type using either a class statement or a call to the 3-parameter version of type.
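A quick sketch of the difference (the exact error message varies between Python versions):

o = object()
try:
    o.foo = 'bar'            # a bare object() instance has no __dict__
except AttributeError:
    pass                     # 'object' object has no attribute 'foo'

class Naked(object):         # a trivial subclass gains a __dict__ ...
    pass

n = Naked()
n.foo = 'bar'                # ... so attribute assignment works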
If you're using Python >= 3.3 you could always use SimpleNamespace, which is included in the types module.
SimpleNamespace is great because you also get a repr and equality testing for free, both of which might come in handy even for a minimalist object.
Translating the JavaScript in the OP’s question would look like:
from types import SimpleNamespace
myobject = SimpleNamespace() # myobject = {}
myobject.foo = 'bar'
You can also use keyword arguments when instantiating SimpleNamespace. These arguments will become attributes on the instantiated SimpleNamespace:
p = SimpleNamespace(name='gary')
p.age = 32
p # => namespace(age=32, name='gary')
So a quick and easy way to turn a dictionary into a SimpleNamespace object (provided the dictionary keys are proper identifiers) is as simple as:
d = {
    'name': 'gary',
    'age': 33  # had a birthday.
}
p = SimpleNamespace(**d)
Python >= 3.7 has dataclasses, which are basically “mutable named tuples”. They may be worth using if you have a lot of data objects.
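A minimal dataclass sketch (the field names here are only illustrative):

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int = 0      # fields can have defaults

p = Person(name='gary', age=32)
p.age = 33            # dataclass instances are mutable by default
p                     # Person(name='gary', age=33) -- the repr comes for free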
Use the Bunch module:
sudo pip install bunch
A Bunch is a dictionary that lets you access its contents via attribute (dict.key) syntax.
And then use it like this:
from bunch import Bunch
b = Bunch()
b.foo = "Bar"
b["foo2"] = "Bar2"
print b
>> Bunch(foo='Bar', foo2='Bar2')
b["foo"] = "Baz"
print b
>> Bunch(foo='Baz', foo2='Bar2')
I'm coming here very late, but I'm surprised nobody has mentioned namedtuples, which accomplish this kind of thing:
from collections import namedtuple

Foo = namedtuple('Foo', ['x'])
f = Foo(x='myattribute')
f.x  # 'myattribute'
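One caveat: namedtuple instances are immutable, so every attribute has to be supplied at construction time and cannot be reassigned later. A small sketch:

from collections import namedtuple

Foo = namedtuple('Foo', ['x'])
f = Foo(x='myattribute')
# f.x = 'other'             # raises AttributeError: can't set attribute
f = f._replace(x='other')   # _replace returns a new instance instead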
For Python 3,
class Obj: pass
o = Obj()
o.name = 'gary'
o.age = 32
o
# <__main__.Obj at 0x17235ca65c0>
o.__dict__
# {'name': 'gary', 'age': 32}
class NakedObject(object):
    pass

myobject = NakedObject()
myobject.foo = 'bar'
Functions can have attributes in Python. Compared to a naked class, you save one whole line of code.
naked = lambda: None
naked.foo = 'bar'
You would need to subclass object first like this...
class Myobject(object):
    pass

myobject1 = Myobject()
myobject1.foo = 'bar'
Perhaps you are looking for something like this:
myobject = {}
myobject['foo'] = 'bar'
then it can be called like:
print myobject['foo']
or you could use a class object for this:
class holder(object):
    pass

then you can use something like this:

hold = holder()
hold.myobject = 'bar'
print hold.myobject
You should probably just use a dict, as per @PsychicOak's answer.
However, if you really want an object you can manipulate, try:
class FooClass(object): pass
You can then assign attributes on FooClass itself, or on instances, as you wish.
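For example, a small sketch of both options:

class FooClass(object): pass

FooClass.shared = 'class attribute'   # set on the class, visible to every instance
obj = FooClass()
obj.foo = 'bar'                       # set on the instance, only visible on obj
obj.shared                            # 'class attribute'
obj.foo                               # 'bar'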
I usually prefer to create a null object for my class:
class User(Model):
    username = CharField()
    password = CharField()

NONE_USER = User(username='', password='')
Then I use it where I would use your naked object.
In some cases extending dict can help you, for example:

class SpecificModelData(dict):
    pass

class Payload(dict):
    pass

Why a dict? It works nicely together with serializers. Why a new class? It gives you a name and a new type.
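A short sketch of the serializer point, using json as an example (the field names are made up):

import json

class Payload(dict):
    pass

p = Payload(user='gary', active=True)
p.__class__.__name__   # 'Payload' -- a distinct type with its own name
json.dumps(p)          # '{"user": "gary", "active": true}' -- still serializes like a plain dict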
I am trying to add class attributes dynamically, but not at the instance level. E.g. what I can do manually as:
class Foo(object):
    a = 1
    b = 2
    c = 3

I'd like to be able to do this with:

class Foo(object):
    dct = {'a' : 1, 'b' : 2, 'c' : 3}
    for key, val in dct.items():
        <update the Foo namespace here>
I'd like to be able to do this without a call to the class from outside the class (so it's portable), or without additional classes/decorators. Is this possible?
Judging from your example code, you want to do this at the same time you create the class. In this case, assuming you're using CPython, you can use locals().
class Foo(object):
    locals().update(a=1, b=2, c=3)
This works because while a class is being defined, locals() refers to the class namespace. It's implementation-specific behavior and may not work in later versions of Python or alternative implementations.
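After the class body has run, the names behave like ordinary class attributes (again, assuming CPython):

Foo.a, Foo.b, Foo.c   # (1, 2, 3)
Foo().a               # 1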
A less dirty-hacky version that uses a class factory is shown below. The basic idea is that your dictionary is converted to a class by way of the type() constructor, and this is then used as the base class for your new class. For convenience of defining attributes with a minimum of syntax, I have used the ** convention to accept the attributes.
def dicty(*bases, **attrs):
    if not bases:
        bases = (object,)
    return type("<from dict>", bases, attrs)

class Foo(dicty(a=1, b=2, c=3)):
    pass

# if you already have the dict, use unpacking
dct = dict(a=1, b=2, c=3)

class Foo(dicty(**dct)):
    pass
This is really just syntactic sugar for calling type() yourself. This works fine, for instance:
class Foo(type("<none>", (object,), dict(a=1, b=2, c=3))):
    pass
Do you mean something like this:
def update(obj, dct):
    for key, val in dct.items():
        setattr(obj, key, val)
Then just go
update(Foo, {'a': 1, 'b': 2, 'c': 3})
This works, because a class is just an object too ;)
If you want to move everything into the class, then try this:
class Foo(object):
    __metaclass__ = lambda name, bases, attrs: type(name, bases, attrs['dct'])
    dct = {'a': 1, 'b': 2, 'c': 3}
This will create a new class, with the members in dct, but all other attributes will not be present - so, you want to alter the last argument to type to include the stuff you want. I found out how to do this here: What is a metaclass in Python?
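Note that __metaclass__ only works in Python 2. In Python 3 the same idea can be expressed with the metaclass keyword argument; a sketch, again building the class from the dct entry only:

def from_dct(name, bases, namespace):
    # ignore the literal class body and build the class from its 'dct' entry
    return type(name, bases, namespace['dct'])

class Foo(metaclass=from_dct):
    dct = {'a': 1, 'b': 2, 'c': 3}

Foo.a   # 1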
The accepted answer is a nice approach. However, one downside is you end up with an additional parent object in the MRO inheritance chain that isn't really necessary and might even be confusing:
>>> Foo.__mro__
(<class '__main__.Foo'>, <class '__main__.<from dict>'>, <class 'object'>)
Another approach would be to use a decorator. Like so:
def dicty(**attrs):
    def decorator(cls):
        # a class __dict__ is a read-only mappingproxy, so use setattr instead
        for name, value in attrs.items():
            setattr(cls, name, value)
        return cls
    return decorator
@dicty(**some_class_attr_namespace)
class Foo():
    pass

In this way, you avoid an additional object in the inheritance chain. The @decorator syntax is just a pretty way of saying:
Foo = dicty(a=1, b=2, c=3)(Foo)
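After decoration the attributes live directly on Foo and the MRO stays short:

Foo.a, Foo.b, Foo.c   # (1, 2, 3)
Foo.__mro__           # (<class '__main__.Foo'>, <class 'object'>)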
I have code which contains the following two lines in it:-
instanceMethod = new.instancemethod(testFunc, None, TestCase)
setattr(TestCase, testName, instanceMethod)
How could it be re-written without using the "new" module? I'm sure new-style classes provide some kind of workaround for this, but I am not sure how.
There is a discussion suggesting that in Python 3 this is not required; the same approach works in Python 2.6:
http://mail.python.org/pipermail/python-list/2009-April/531898.html
See:
>>> class C: pass
...
>>> c=C()
>>> def f(self): pass
...
>>> c.f = f.__get__(c, C)
>>> c.f
<bound method C.f of <__main__.C instance at 0x10042efc8>>
>>>
Reiterating the question for everyone's benefit, including mine.
Is there a replacement in Python3 for new.instancemethod? That is, given an arbitrary instance (not its class) how can I add a new appropriately defined function as a method to it?
So the following should suffice:
TestCase.testFunc = testFunc.__get__(None, TestCase)
You can replace "new.instancemethod" by "types.MethodType":
from types import MethodType as instancemethod

class Foo:
    def __init__(self):
        print 'I am', id(self)

def bar(self):
    print 'I have been bound to', id(self)

foo = Foo()                    # prints 'I am <instance id>'
mm = instancemethod(bar, foo)  # automatically uses foo.__class__
mm()                           # prints 'I have been bound to <same instance id>'
foo.mm                         # traceback, because foo has no attribute 'mm' yet
foo.mm = mm                    # create a reference to the bound method in foo
foo.mm()                       # prints 'I have been bound to <same instance id>'
This will do the same:
>>> TestCase.testName = testFunc
Yeah, it's really that simple.
Your line
>>> instanceMethod = new.instancemethod(testFunc, None, TestCase)
Is in practice (although not in theory) a noop. :) You could just as well do
>>> instanceMethod = testFunc
In fact, in Python 3 I'm pretty sure it would be the same in theory as well, but the new module is gone so I can't test it in practice.
To confirm that it's not needed to use new.instancemethod() at all since Python v2.4, here's an example of how to replace an instance method. It's also not needed to use descriptors (even though it works).
class Ham(object):
    def spam(self):
        pass

h = Ham()

def fake_spam():
    h._spam = True

h.spam = fake_spam
h.spam()
# h._spam should be True now.
Handy for unit testing.
I'm writing some serialization/deserialization code in Python that will read/write an inheritance hierarchy from some JSON. The exact composition will not be known until the request is sent in.
So the elegant solution seems to be to recursively introspect the class hierarchy of the object being emitted and then, on the way back up through the tree, install the correct values in a basic Python type.
E.g.,
  A
 / \
B   C
If I call my "introspect" routine on B, it should return a dict that contains a mapping from all of A's variables to their values, as well as B's variables and their values.
As it now stands, I can look through B.__slots__ or B.__dict__, but I only can pull out B's variable names from there.
How do I get the __slots__/__dict__ of A, given only B? (or C).
I know that Python doesn't directly support casting like C++ and its descendants do.
You might try using the type.mro() method to find the method resolution order.
class A(object):
    pass

class B(A):
    pass

class C(A):
    pass

a = A()
b = B()
c = C()
>>> type.mro(type(b))
[<class '__main__.B'>, <class '__main__.A'>, <type 'object'>]
>>> type.mro(type(c))
[<class '__main__.C'>, <class '__main__.A'>, <type 'object'>]
or
>>> type(b).mro()
Edit: I was thinking you wanted to do something like this...
>>> A = type("A", (object,), {'a':'A var'}) # create class A
>>> B = type("B", (A,), {'b':'B var'}) # create class B
>>> myvar = B()
def getvars(obj):
    ''' return dict where key/value is attribute-name/class-name '''
    retval = dict()
    for i in type(obj).mro():
        for k in i.__dict__:
            if not k.startswith('_'):
                retval[k] = i.__name__
    return retval
>>> getvars(myvar)
{'a': 'A', 'b': 'B'}
>>> for i in getvars(myvar):
...     print getattr(myvar, i)  # or use setattr to modify the attribute value
...
A var
B var
Perhaps you could clarify what you are looking for a bit further?
At the moment your description doesn't describe Python at all. Let's assume that in your example A, B and C are the names of the classes:
>>> class A(object):
...     def __init__(self):
...         self.x = 1
...
>>> class B(A):
...     def __init__(self):
...         A.__init__(self)
...         self.y = 1
...
Then a runtime instance could be created as:
b = B()
If you look at the dictionary of the runtime object, there is no distinction between its own variables and variables belonging to its superclass. So for example:
dir(b)
[ ... snip lots of double-underscores ... , 'x', 'y']
So the direct answer to your question is that it works like that already, but I suspect that is not very helpful to you. What does not show up is methods as they are entries in the namespace of the class, while variables are in the namespace of the object. If you want to find methods in superclasses then use the mro() call as described in the earlier reply and then look through the namespaces of the classes in the list.
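For example, a small sketch that combines both: it walks the MRO for class-level names (methods and class attributes) and then overlays the instance's own variables. The helper name is made up:

def all_attributes(obj):
    attrs = {}
    # walk the MRO from object down to the concrete class so that
    # subclasses override names inherited from their parents
    for cls in reversed(type(obj).mro()):
        for name, value in vars(cls).items():
            if not name.startswith('__'):
                attrs[name] = value
    attrs.update(vars(obj))   # instance variables take final precedence
    return attrs

Called on the b instance above, this returns {'x': 1, 'y': 1} plus any non-dunder methods defined on A or B.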
While I was looking around for simpler ways to do JSON serialisation I found some interesting things in the pickle module. One suggestion is that you might want to pickle / unpickle objects rather than write your own code to traverse the hierarchy. The pickle output is an ASCII stream and it may be easier for you to convert that back and forth to JSON. There are some starting points in PEP 307.
The other suggestion is to take a look at the __reduce__ method, try it on the objects that you want to serialise as it may be what you are looking for.
If you only need a tree (not diamond-shaped inheritance), there is a simple way to do it. Represent the tree by nested lists: a branch is [object, [children]] and a leaf is [object, [[]]].
Then, by defining the recursive function:
def classTree(cls):
    # return all subclasses in form of a tree (nested list)
    return [cls, [[b for c in cls.__subclasses__() for b in classTree(c)]]]
You can get the inheritance tree:
class A():
    pass

class B(A):
    pass

class C(B):
    pass

class D(C):
    pass

class E(B):
    pass
>>> classTree(A)
[<class 'A'>, [[<class 'B'>, [[<class 'C'>, [[<class 'D'>, [[]]]], <class 'E'>, [[]]]]]]]
Which is easy to serialize since it's only a list. If you want only the names, replace cls by cls.__name__.
For deserialisation, you have to get your class back from text. Please provide details in your question if you want more help for this.