This is an unusual question, but I'd like to dynamically generate the __slots__ attribute of the class based on whatever attributes I happened to have added to the class.
For example, if I have a class:
class A(object):
    one = 1
    two = 2
    __slots__ = ['one', 'two']
I'd like to generate that list dynamically rather than spelling the names out by hand. How would I do this?
At the point where you're trying to define __slots__, the class hasn't been built yet, so you cannot generate the list dynamically from within the body of A itself.
To get the behaviour you want, use a metaclass to introspect the definition of A and add a __slots__ attribute for you.
class MakeSlots(type):
    def __new__(cls, name, bases, attrs):
        attrs['__slots__'] = attrs.keys()
        return super(MakeSlots, cls).__new__(cls, name, bases, attrs)

class A(object):
    one = 1
    two = 2
    __metaclass__ = MakeSlots
One very important thing to be aware of -- if those attributes stay in the class, the __slots__ generation will be useless... okay, maybe not useless -- it will make the class attributes read-only; probably not what you want.
The easy way is to say, "Okay, I'll initialize them to None, then let them disappear." Excellent! Here's one way to do that:
class B(object):
    three = None
    four = None
    temp = vars()                    # get the local namespace as a dict()
    __slots__ = temp.keys()          # put their names into __slots__
    __slots__.remove('temp')         # remove non-__slots__ names
    __slots__.remove('__module__')   # now remove the names from the local
    for name in __slots__:           # namespace so we don't get read-only
        del temp[name]               # class attributes
    del temp                         # and get rid of temp
If you want to keep those initial values it takes a bit more work... here's one possible solution:
class B(object):
    three = 3
    four = 4
    def __init__(self):
        for key, value in self.__init__.defaults.items():
            setattr(self, key, value)
    temp = vars()
    __slots__ = temp.keys()
    __slots__.remove('temp')
    __slots__.remove('__module__')
    __slots__.remove('__init__')
    __init__.defaults = dict()
    for name in __slots__:
        __init__.defaults[name] = temp[name]
        del temp[name]
    del temp
As you can see, it is possible to do this without a metaclass -- but who wants all that boilerplate? A metaclass could definitely help us clean this up:
class MakeSlots(type):
    def __new__(cls, name, bases, attrs):
        new_attrs = {}
        new_attrs['__slots__'] = slots = attrs.keys()
        slots.remove('__module__')
        slots.remove('__metaclass__')
        new_attrs['__weakref__'] = None
        new_attrs['__init__'] = new_init
        # keep each class's defaults on the class itself -- hanging them off the
        # shared new_init function would let every newly created class overwrite
        # the defaults of the previous one
        new_attrs['_defaults'] = defaults = dict()
        for attr in slots:
            defaults[attr] = attrs[attr]
        return super(MakeSlots, cls).__new__(cls, name, bases, new_attrs)

def new_init(self):
    for key, value in self._defaults.items():
        setattr(self, key, value)
class A(object):
    __metaclass__ = MakeSlots
    one = 1
    two = 2

class B(object):
    __metaclass__ = MakeSlots
    three = 3
    four = 4
Now all the tediousness is kept in the metaclass, and the actual class is easy to read and (hopefully!) understand.
If you need to have anything else in these classes besides attributes I strongly suggest you put whatever it is in a mixin class -- having them directly in the final class would complicate the metaclass even more.
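For example, here is a minimal sketch of that idea (the mixin name and its method are made up purely for illustration):

class ShowMixin(object):
    # behaviour lives in the mixin; the slotted class holds only data
    def show(self):
        for name in self.__slots__:
            print('%s = %r' % (name, getattr(self, name)))

class C(ShowMixin):
    __metaclass__ = MakeSlots
    one = 1
    two = 2

C().show()    # prints one = 1 and two = 2 (order may vary)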
I want to replace string literals in my code, as I want to minimize risk of typos, especially in dict key sets:
a['typoh'] = 'this is bad'
I don't want to type things in twice (risk of a missed typo on the value)
I want it to be "trackable" by various IDEs (i.e. click thru to see where it is defined and escape completion).
Enums are out: 'E.a.name' to get 'a' is dumb.
I have been told this can be done with slots, but I can't figure out how without a little trickery. I can think of a few ways below:
This is an unacceptable answer:
class TwiceIsNotNice(object):
    this_is_a_string = 'this_is_a_string'
    ... (five thousand string constants in)
    this_has_a_hard_to_spot_typographical_error = \
        'this_has_a_had_to_spot_typographical_error'
    ... (five thousand more string constants)
A clear but annoying way is with a "Stringspace" class/object where the attributes are set via a string list passed in. This solves the minimized typo risk, is VERY easy to read, but has neither IDE trackability nor autocompletion. It's okay, but makes people complain (please don't complain here, I am simply showing how it could be done):
string_consts = Stringspace('a', 'b',...,'asdfasdfasdf')
print(string_consts.a)
... where:
class Stringspace(object):
    def __init__(self, *strlist):
        for s in strlist:
            setattr(self, s, s)
Another way is to define a class using a sentinel object, setting the value in a post phase. This is okay, is trackable, presents itself as an actual class, allows for aliases, etc. But it requires an annoying extra call at the end of the class:
same = object()

class StrList(object):
    this_is_a_strval = same
    this_is_another_strval = same
    this_gets_aliased = "to something else"

# This could of course become a function
for attr in dir(StrList):
    if getattr(StrList, attr) is same:
        setattr(StrList, attr, attr)

print(StrList.this_is_a_strval)
If this is what the slot magic is supposedly about, then I am disappointed, as one would have to actually instantiate an object:
class SlotEnum(object):
    __slots__ = []
    def __init__(self):
        for k in self.__slots__:
            setattr(self, k, k)

class Foo(SlotEnum):
    __slots__ = ['a', 'b']

foo_enum_OBJECT = Foo()
print(foo_enum_OBJECT.a)
Enums are out: E.a.name to get a is dumb.
from enum import Enum, auto

class StrEnum(str, Enum):
    "base class for Enum members to be strings matching the member's name"
    def __repr__(self):
        return '<%s.%s>' % (self.__class__.__name__, self.name)
    def __str__(self):
        return self.name

class E(StrEnum):
    a = auto()
    this_is_a_string = auto()
    no_typo_here = auto()
>>> print(repr(E.a))
<E.a>
>>> print(E.a)
a
>>> print('the answer is: %s!' % E.a)
the answer is: a!
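Because you always refer to the member rather than a quoted literal, it can also serve directly as the dictionary key from the original example; a small usage sketch:

settings = {}
settings[E.this_is_a_string] = 'this is better'
print(settings[E.this_is_a_string])    # this is better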
I found one solution at this external link using a custom metaclass, for your class containing the string member variables:
Step 1 of 2: The custom metaclass can be defined like this:
class MetaForMyStrConstants(type):
    def __new__(metacls, cls, bases, classdict):
        object_attrs = set(dir(type(cls, (object,), {})))
        simple_enum_cls = super().__new__(metacls, cls, bases, classdict)
        simple_enum_cls._member_names_ = set(classdict.keys()) - object_attrs
        non_members = set()
        for attr in simple_enum_cls._member_names_:
            if attr.startswith('_') and attr.endswith('_'):
                non_members.add(attr)
            else:
                setattr(simple_enum_cls, attr, attr)
        simple_enum_cls._member_names_.difference_update(non_members)
        return simple_enum_cls
Step 2 of 2: The class defining your strings can be defined like this (with dummy values, e.g. empty tuples):
class MyStrConstants(metaclass=MetaForMyStrConstants):
    ONE_LONG_STR = ()
    ANOTHER_LONG_STR = ()
    THE_REAL_LONGEST_STR = ()
Testing it out:
print (MyStrConstants.ONE_LONG_STR)
print (MyStrConstants.ANOTHER_LONG_STR)
print (MyStrConstants.THE_REAL_LONGEST_STR)
Output:
ONE_LONG_STR
ANOTHER_LONG_STR
THE_REAL_LONGEST_STR
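Since each surviving attribute ends up bound to a plain string equal to its own name, the constants can be used anywhere the literal would have gone, for example as dictionary keys:

a = {}
a[MyStrConstants.ONE_LONG_STR] = 'no typo risk here'
print(a['ONE_LONG_STR'])    # no typo risk here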
I want to share some information between all the instances of some class and all its derived classes.
class Base():
    cv = "some value"    # information I want to share

    def print_cv(self, note):
        print("{}: {}".format(note, self.cv))

    @classmethod
    def modify_cv(cls, new_value):
        # do some class-specific stuff
        cls.cv = new_value

class Derived(Base):
    pass

b = Base()
d = Derived()
b.print_cv("base")
d.print_cv("derived")
Output is as expected (instances of both classes see correct class attribute):
base: some value
derived: some value
I can change the value of this class attribute and everything is still fine:
# Base.cv = "new value"
b.modify_cv("new value")
b.print_cv("base") # -> base: new value
d.print_cv("derived") # -> derived: new value
So far so good. The problem is that the "connection" between the Base and Derived classes can be broken if I modify cv via the derived class:
# Derived.cv = "derived-specific value"
d.modify_cv("derived-specific value")
b.print_cv("base") # -> base: new value
d.print_cv("derived") # -> derived: derived-specific value
This behavior is expected, but it is not what I want!
I understand why b and d see different values of cv - because they are instances of different classes. I have overridden the cv value in the derived class, and now the derived class behaves differently; I've used this feature many times.
But for my current task I need b and d to always use the same cv!
UPDATE
I have updated the question and now it better describes the real-life situation. Actually I did not modify cv value like this:
Base.cv = "new value"
modifications were done in some classmethods (actually, all these class methods were implemented in the Base class).
And now solution became obvious, I just need to modify the method slightly:
class Base():
    @classmethod
    def modify_cv(cls, new_value):
        # cls.cv = new_value
        Base.cv = new_value
Thank you all for the discussion and ideas (in the beginning I was going to use getters/setters and a module-level attribute).
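For comparison, that module-level alternative would have looked roughly like this sketch (names made up):

# shared_config.py -- one value shared at module level, used via accessor functions
_cv = "some value"

def get_cv():
    return _cv

def set_cv(new_value):
    global _cv
    _cv = new_value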
classmethod is useful when you need to know which class is calling the method, but if you want the same behaviour regardless of the class that's calling the method, you could use staticmethod instead. You can then access the class variable simply through the base class's name with Base.cv:
class Base:
    cv = "some value"    # information I want to share

    def print_cv(self, note):
        print("{}: {}".format(note, self.cv))

    @staticmethod
    def modify_cv(new_value):
        Base.cv = new_value
You can still call it on any instance or subclass, but it always changes Base.cv:
>>> b = Base()
>>> d = Derived()
>>> Base.cv == Derived.cv == b.cv == d.cv == "some value"
True
>>> d.modify_cv("new value")
>>> Base.cv == Derived.cv == b.cv == d.cv == "new value"
True
Update:
If you still need access to the class for other reasons, use classmethod with the cls argument as you did before, but still access the base class's variable through Base.cv rather than cls.cv:
@classmethod
def modify_cv(cls, new_value):
    do_stuff_with(cls)
    Base.cv = new_value
You have to override __setattr__ on the class of the class, i.e. a metaclass:
class InheritedClassAttributesMeta(type):
    def __setattr__(self, key, value):
        cls = None
        if not hasattr(self, key):
            # The attribute doesn't exist anywhere yet,
            # so just set it here
            cls = self
        else:
            # Find the base class that's actually storing it
            for cls in self.__mro__:
                if key in cls.__dict__:
                    break
        type.__setattr__(cls, key, value)

class Base(metaclass=InheritedClassAttributesMeta):
    cv = "some value"

class Derived(Base):
    pass

print(Derived.cv)
Derived.cv = "other value"
print(Base.cv)
Using metaclasses is often overkill, so directly specifying Base might be better.
To avoid unwanted side effects with this solution, consider checking first if key is in some predefined set of attribute names before changing the behaviour.
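For instance, a variant of the metaclass above with such a guard could look like this (SHARED_NAMES is a made-up whitelist, purely for illustration):

class GuardedClassAttributesMeta(type):
    SHARED_NAMES = {'cv'}    # only these names get the special treatment

    def __setattr__(self, key, value):
        if key not in GuardedClassAttributesMeta.SHARED_NAMES:
            type.__setattr__(self, key, value)    # everything else behaves normally
            return
        # find the base class that's actually storing the attribute
        for cls in self.__mro__:
            if key in cls.__dict__:
                break
        else:
            cls = self    # not stored anywhere yet, so set it here
        type.__setattr__(cls, key, value)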
In Python, inside a method, you can use the bare __class__ variable name to mean the actual class the method is defined in.
This differs from the cls argument that is passed to classmethods, or self.__class__ in regular methods, which will refer to the subclass if the method is invoked on a subclass. Thus, cls.attr = value would set the value in the subclass's __dict__, and from that point on the attribute value on that subclass is independent of the base class. This is what you are getting there.
Instead, you can use:
class MyClass:
    cv = "value"

    @classmethod    # this is actually optional
    def modify_cv(cls, new_value):
        __class__.cv = new_value
__class__ is created automatically in Python 3 by the same mechanism that allows one to write the parameterless form of super().
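A quick check that this keeps subclasses in sync (MySubclass is just an illustrative name):

class MySubclass(MyClass):
    pass

MySubclass().modify_cv("new value")    # assigns on MyClass, not on MySubclass
print(MyClass.cv, MySubclass.cv)       # new value new value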
I am trying to automatically create some SQL tables from the definition of some Python classes. I tried using dir(), but it returns the names sorted alphabetically, so the definition order of the class members is lost.
Reading on the internet I found the following here
import collections

class OrderedClass(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(cls, name, bases, namespace, **kwds):
        result = type.__new__(cls, name, bases, dict(namespace))
        result.members = tuple(namespace)
        return result
class A(metaclass=OrderedClass):
    def one(self): pass
    def two(self): pass
    def three(self): pass
    def four(self): pass

>>> A.members
('__module__', 'one', 'two', 'three', 'four')
I successfully implemented a copy of it, and it appears to be doing what it should, except that it only saves the methods in the members variable, and I also need the class member variables.
Question:
How could I get a list of the member variables preserving their definition order? I don't care about class methods, and I am actually ignoring them.
Note: The reason why the order is important is because the tables will have constraints that reference some of the table columns, and they must go after defining the column, but they are appearing before.
Edit: This is a sample class in my real program
from collections import OrderedDict

class SQLTable(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return OrderedDict()

    def __new__(cls, name, bases, namespace, **kwds):
        result = type.__new__(cls, name, bases, dict(namespace))
        result.members = tuple(namespace)
        return result
class AreaFisicoAmbiental(metaclass=SQLTable):
    def __init__(self, persona, datos):
        # edificacion
        self.persona = persona
        self.tipoEdificacion = datos[0]
        self.tipoDeParedes = datos[1]
        self.detallesTipoDeParedes = datos[2]
        self.tipoDeTecho = datos[3]
        self.detallesTipoDeTecho = datos[4]
        self.tipoDePiso = datos[5]
        self.detallesTipoDePiso = datos[6]
        # ambientes
        self.problemaDeInfraestructura = datos[7]
        self.detallesProblemaDeInfraestructura = datos[9]
        self.condicionDeTenencia = datos[10]
        self.detallesCondicionDeTenencia = datos[11]
        self.sala = toBool(datos[12])
        self.comedor = toBool(datos[13])
        self.baño = toBool(datos[14])
        self.porche = toBool(datos[15])
        self.patio = toBool(datos[16])
        self.lavandero = toBool(datos[17])
        self.habitaciones = toInt(datos[19])
        # servicios básicos
        self.aguasServidas = toBool(datos[21])
        self.aguaPotable = toBool(datos[22])
        self.luz = toBool(datos[23])
        self.gas = datos[24]
        self.internet = toBool(datos[25])
Doing
print(AreaFisicoAmbiental.members)
Outputs:
('__module__', '__qualname__', '__init__')
Variable names are in Spanish because they will be used as the table column names, and also as the labels for a web application that will be generated from the database structure.
I know that Django does something like this, but I already have my database inspector, which does the opposite thing, so now I need Django-like functionality to use my generator.
Updated
As I commented, I think you're probably confusing class attributes with instance attributes and really want to keep track of the latter. Instance attributes are dynamic and can be added, changed, or removed at any time, so trying to do this with a metaclass like the one shown in your question won't work (and different instances may have a different group of them defined).
You may be able to keep track of their creation and deletion by overloading a couple of the class's special methods, namely __setattr__() and __delattr__(), and storing their effects in a private data member which is an OrderedSet. Doing so will keep track of what they are and preserve the order in which they were created.
Both of these methods will need to be careful not to operate upon the private data member itself.
That said, here's something illustrating such an implementation:
# -*- coding: iso-8859-1 -*-
# OrderedSet recipe from http://code.activestate.com/recipes/576694
from orderedset import OrderedSet

class AreaFisicoAmbiental(object):
    def __init__(self, persona, datos):
        self._members = OrderedSet()
        self.persona = persona
        self.tipoEdificacion = datos[0]
        self.tipoDeParedes = datos[1]

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        if name != '_members':
            self._members.add(name)

    def __delattr__(self, name):
        if name != '_members':
            object.__delattr__(self, name)
            self._members.discard(name)

    def methodA(self, value1, value2):    # add some members
        self.attribute1 = value1
        self.attribute2 = value2

    def methodB(self):
        del self.attribute1               # remove a member

if __name__ == '__main__':
    a = AreaFisicoAmbiental('Martineau', ['de albañilería', 'vinilo'])
    a.methodA('attribute1 will be deleted', 'but this one will be retained')
    a.methodB()           # deletes a.attribute1
    a.attribute3 = 42     # add an attribute outside the class
    print('current members of "a":')
    for name in a._members:
        print('  {}'.format(name))
Output:
current members of "a":
persona
tipoEdificacion
tipoDeParedes
attribute2
attribute3
A final note: It would be possible to create a metaclass that added these two methods automatically to client classes, which would make it easier to modify existing classes.
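A rough sketch of what such a metaclass could look like (an untested idea along those lines; the names are made up, and _members is created lazily so client classes don't need to set it up in __init__):

from orderedset import OrderedSet    # same recipe as above

class TrackMembersMeta(type):
    # adds member-tracking __setattr__/__delattr__ to classes that don't define them
    def __new__(metacls, name, bases, namespace):

        def __setattr__(self, attr, value):
            object.__setattr__(self, attr, value)
            if attr != '_members':
                if not hasattr(self, '_members'):
                    object.__setattr__(self, '_members', OrderedSet())
                self._members.add(attr)

        def __delattr__(self, attr):
            if attr != '_members':
                object.__delattr__(self, attr)
                self._members.discard(attr)

        namespace.setdefault('__setattr__', __setattr__)
        namespace.setdefault('__delattr__', __delattr__)
        return super().__new__(metacls, name, bases, namespace)

class Tracked(metaclass=TrackMembersMeta):
    def __init__(self, persona):
        self.persona = persona    # recorded in self._members automatically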
Maybe Python's enum would be enough for the task; it does guarantee a stable definition order.
The basic implementation of DDL would look like this:
from enum import Enum

class Table1(Enum):
    nombre = ''
    edad = 0
    sexo = True
    ...
then later you could do:
for prop in Table1:
    print(prop)
this gives you
Table1.nombre
Table1.edad
Table1.sexo
if you need to construct a proper table definition you could use Table1.<field>.value:
>>> print(type(Table1.nombre.value))
<class 'str'>
>>> print(type(Table1.edad.value))
<class 'int'>
and so on. Using this technique you could even link some tables to others thus constructing a complete definition of a whole set of tables and their relationships.
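For example, a relationship could be expressed by using a member of one table as the value of a member in another (Table2 here is purely hypothetical):

class Table2(Enum):
    empresa = ''
    # a "foreign key" of sorts: this column refers to Table1's nombre column
    persona = Table1.nombre

print(Table2.persona.value)    # Table1.nombre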
As for data objects (e.g. a row in a table, or a row of query results), I think you don't need any ordering of your own here; you just need to maintain a link to the corresponding table class (from which the order can be restored, although I don't think that's an often-needed option). So these classes could look like this:
class Table1Row(object):
    _table = Table1
    __slots__ = tuple(k.name for k in Table1)
    ...
or simply
class ASpecificQueryResults(object):
    __slots__ = (Table1.nombre.name, Table2.empresa.name, ...)
Probably you need a factory which would build the row classes based on the query results and/or the table definitions.
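A rough sketch of what such a factory might look like, assuming the enum-based table definitions above (make_row_class is a made-up name):

def make_row_class(table_enum):
    # build a row class whose __slots__ mirror the table's columns
    return type(
        table_enum.__name__ + 'Row',
        (object,),
        {'_table': table_enum,
         '__slots__': tuple(member.name for member in table_enum)},
    )

Table1Row = make_row_class(Table1)
row = Table1Row()
row.nombre = 'Ana'    # only the table's columns are allowed as attributes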
Edit: the idea with __slots__ in the *Row classes probably needs some more polish, but that heavily depends on your actual needs.
P.S. Perhaps 'Table1.sexo' also should be an enum in our complicated times ;)
I have a chain of inheritance in Python, and I want each child class to be able to add on new custom parameters. Right now I'm doing this:
class A(object):
    PARAM_NAMES = ['blah1']
    ...

class B(A):
    PARAM_NAMES = A.PARAM_NAMES + ['blah2']
    ...
I'm wondering if there's a slicker method, though, without referencing A twice? Can't use super() because it's not within a method definition, afaik. I suppose I could use a class method, but that'd be annoying (since I really would want a property).
What's the right way to do this?
Of course there is always black magic you can do ... but the question is: just because you can, should you?
class MyMeta(type):
    items = []
    def __new__(meta, name, bases, dct):
        return super(MyMeta, meta).__new__(meta, name, bases, dct)
    def __init__(cls, name, bases, dct):
        MyMeta.items.extend(cls.items)
        cls.items = MyMeta.items[:]
        super(MyMeta, cls).__init__(name, bases, dct)

class MyKlass(object):
    __metaclass__ = MyMeta

class A(MyKlass):
    items = ["a", "b", "c"]

class B(A):
    items = ["1", "2", "3"]

print A.items
print B.items
since this creates a copy it will not suffer from the same problem as the other solution
(please note that I don't really recommend doing this ... it's just to show you can)
This may or may not be smart, but it's technically possible to use a metaclass for this. Unlike Joran's method, I use a property, so that it retains its fully dynamic nature (that is, if you modify any class's private _PARAM_NAMES list after defining the class, the corresponding PARAM_NAMES property of every other derived class reflects that change). For this reason I put an add_param method on the base class.
Python 3 is assumed here, and the PARAM_NAMES property returns a set to avoid duplicate items.
class ParamNameBuilderMeta(type):
    def __new__(mcl, name, bases, dct):
        names = dct.get("PARAM_NAMES", [])
        names = {names} if isinstance(names, str) else set(names)
        dct["_PARAM_NAMES"] = names
        dct["PARAM_NAMES"] = property(lambda s: type(s).PARAM_NAMES)
        return super().__new__(mcl, name, bases, dct)

    @property
    def PARAM_NAMES(cls):
        # collect unique list items ONLY from our classes in the MRO
        return set().union(*(c._PARAM_NAMES for c in reversed(cls.__mro__)
                             if isinstance(c, ParamNameBuilderMeta)))
Usage:
class ParamNameBuilderBase(metaclass=ParamNameBuilderMeta):
    @classmethod
    def add_param(cls, param_name):
        cls._PARAM_NAMES.add(param_name)

class A(ParamNameBuilderBase):
    PARAM_NAMES = 'blah1'

class B(A):
    PARAM_NAMES = 'blah1', 'blah2'

class C(B):
    pass
Check to make sure it works on both classes and instances thereof:
assert C.PARAM_NAMES == {'blah1', 'blah2'}
assert C().PARAM_NAMES == {'blah1', 'blah2'}
Check to make sure it's still dynamic:
C.add_param('blah3')
assert C.PARAM_NAMES == {'blah1', 'blah2', 'blah3'}
The behavior you've described is actually quite specific. You've said that you
want each child class to be able to add on new custom parameters
But the way you've implemented it, this will result in unpredictable behaviour. Consider:
class A(object):
    PARAM_NAMES = ['blah1']

class B(A):
    PARAM_NAMES = A.PARAM_NAMES + ['blah2']

class C(A):
    pass

print(A.PARAM_NAMES)
print(B.PARAM_NAMES)
print(C.PARAM_NAMES)

A.PARAM_NAMES.append('oops')
print(C.PARAM_NAMES)
What we notice is that the classes that choose to add new parameters have a new reference to the parameter list, while ones that do not add new parameters have the same reference as their parent. Unless carefully controlled, this is unsafe behaviour.
It is more reliable to only use constants as class properties, or to redefine the list entirely each time (make it a tuple), which is not "slicker". Otherwise, I'd recommend class methods, as you suggest, and making the property an instance variable.
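For example, the classmethod route avoids the shared-reference problem because a fresh list is built on every call (a sketch, reusing the A/B names from the question):

class A(object):
    @classmethod
    def param_names(cls):
        return ['blah1']

class B(A):
    @classmethod
    def param_names(cls):
        return super(B, cls).param_names() + ['blah2']

print(B.param_names())    # ['blah1', 'blah2']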
Currently __setattr__ only works for instances. Is there any similar method for classes? I am asking because I want to collect the list of defined attributes, in order, as the user defines them in a class like this:
class CfgObj(object):
    _fields = []
    def __setattr__(self, name, value):
        self._fields.append([name, value])
        object.__setattr__(self, name, value)

class ACfg(CfgObj):
    setting1 = Field(str, default='set1', desc='setting1 ...')
    setting2 = Field(int, default=5, desc='setting2...')
I know the above code will not work as expected, because __setattr__ is only called for instance attribute assignment, as below:
acfg = ACfg()
acfg.c = 1
acfg._fields == [['c', 1]]
So, is there any equivalent of __setattr__ for a Python class? The main purpose is to collect the defined attributes, in order, as the user defines them in the class.
Yes, but that's not how you want to do it.
class MC(type):
    def __init__(cls, name, bases, dct):
        print dct
        super(MC, cls).__init__(name, bases, dct)

class C(object):
    __metaclass__ = MC
    foo = 42
If you define __setattr__() on the metaclass of a class, it will be called when setting attributes on the class, but only after creating the class:
>>> class Meta(type):
...     def __setattr__(cls, name, value):
...         print "%s=%r" % (name, value)
...
>>> class A(object):
...     __metaclass__ = Meta
...
>>> A.a = 1
a=1
But it won't work at the time of class definition, so it's probably not what you want.
Getting the class attributes in the metaclass __init__() works, but you lose the order of definition (and multiple definitions of the same name as well).
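In Python 3, the definition order can be kept by giving the metaclass a __prepare__ method that returns an ordered mapping, along the lines of the OrderedClass recipe shown earlier; a rough sketch, assuming the asker's Field class (FieldCollector and Settings are made-up names):

import collections

class FieldCollector(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()    # remembers assignment order

    def __new__(metacls, name, bases, namespace, **kwds):
        cls = super().__new__(metacls, name, bases, dict(namespace))
        cls._fields = [(key, value) for key, value in namespace.items()
                       if isinstance(value, Field)]
        return cls

class Settings(metaclass=FieldCollector):
    setting1 = Field(str, default='set1', desc='setting1 ...')
    setting2 = Field(int, default=5, desc='setting2...')

print(Settings._fields)    # the (name, Field) pairs, in definition order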
What I would do to solve your problem - but not your question - is to create a counter of Field objects and store the current value of the counter on each Field as it is created:
class Field(object):
    count = 0
    def __init__(self, value, default=None, desc=None):
        self.value = value
        self.default = default
        self.desc = desc
        # Here comes the magic
        self.nth = Field.count
        Field.count += 1
        # self.created_at = time.time()
Then I would create a method for returning all the fields ordered by their counter value:
class CfgObj(object):
    def params(self):
        ns = dir(self)
        fs = [getattr(self, field)
              for field in ns
              if isinstance(getattr(self, field), Field)]
        # fs = sorted(fs, key=lambda f: f.created_at)
        fs = sorted(fs, key=lambda f: f.nth)
        return fs
Its usage is intuitive:
class ACfg(CfgObj):
    setting1 = Field(str, default='set1', desc='setting1 ...')
    setting2 = Field(int, default=5, desc='setting2...')

print ACfg().params()
Clearly the fields are ordered by time of object creation, not field creation, but it can be enough for you. Is it?