How to overwrite __init__ while saving the old __init__ inheriting from OrderedDict - python

class AppendiveDict(c.OrderedDict):
    def __init__(self, func, *args):
        c.OrderedDict.__init__(args)
    def __setitem__(self, key, value):
        if key in self:
            self[key] = func(self[key])
        else:
            c.OrderedDict.__setitem__(self, key, value)
I have this class that is supposed to apply the func function to items that are already in the dictionary. If I do c.OrderedDict.__init__(args), it tells me that the __init__ descriptor needs an OrderedDict and not a tuple.
If I change it to c.OrderedDict.__init__(self), an empty OrderedDict shows up in the representation.

The solution is to pass both arguments, because c.OrderedDict.__init__ needs to know which instance it is supposed to operate on and which arguments to use:
import collections as c

class AppendiveDict(c.OrderedDict):
    def __init__(self, func, *args):
        c.OrderedDict.__init__(self, args)  # pass self AND args
Or use super() with args only, because super() returns a bound method that already knows which instance it is called on:
class AppendiveDict(c.OrderedDict):
    def __init__(self, func, *args):
        super().__init__(args)
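For completeness, here is a sketch of a fully working version. Note that neither snippet above actually stores func, even though __setitem__ refers to it, so this version keeps it on the instance and routes the in-place update through OrderedDict.__setitem__ to avoid recursing into the override; both of those details are my additions, not part of the original answer:
import collections as c

class AppendiveDict(c.OrderedDict):
    def __init__(self, func, *args):
        self.func = func        # added: keep func so __setitem__ can use it
        super().__init__(args)  # args is a tuple of (key, value) pairs
    def __setitem__(self, key, value):
        if key in self:
            # apply func to the stored value; calling the base class
            # directly avoids re-entering this method
            c.OrderedDict.__setitem__(self, key, self.func(self[key]))
        else:
            c.OrderedDict.__setitem__(self, key, value)

d = AppendiveDict(str.upper, ('a', 'x'))
d['a'] = 'ignored'  # 'a' already exists, so its value becomes func('x') == 'X'
d['b'] = 'y'        # new key, stored as-is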

Related

How to specify Python class attributes with constructor keyword arguments

I'm trying to come up with a way to allow specification of any number of class attributes upon instantiation, very similar to a dictionary. Ideal use case:
>>> instance = BlankStruct(spam=0, eggs=1)
>>> instance.spam
0
>>> instance.eggs
1
where BlankStruct is defined as:
class BlankStruct(Specifiable):
    @Specifiable.specifiable
    def __init__(self, **kwargs):
        pass
I was thinking of using a parent class decorator, but am lost in a mind-trip about whether to use instance methods, class methods or static methods (or possibly none of the above!). This is the best I've come up with so far, but the problem is that the attributes are applied to the class instead of the instance:
class Specifiable:
    @classmethod
    def specifiable(cls, constructor):
        def constructor_wrapper(*args, **kwargs):
            constructor(*args, **kwargs)
            cls.set_attrs(**kwargs)
        return constructor_wrapper

    @classmethod
    def set_attrs(cls, **kwargs):
        for key in kwargs:
            setattr(cls, key, kwargs[key])
How can I make such a parent class?
NOTE:
Yes, I know what I'm trying to do is bad practice. But sometimes you just have to do what your boss tells you.
Yes, you can do the following; however, I do NOT recommend it, as it clearly goes against the "explicit is better than implicit" principle:
class BlankStruct:
    def __init__(self, **attrs):
        self.__dict__.update(**attrs)
    def __getattr__(self, attr):
        return self.__dict__.get(attr, None)

f = BlankStruct(spam=0, eggs=1)
A more complete response is available here and inspired this answer.
I would recommend being explicit about the properties you want your class to have. Otherwise, you are left with a class that has a high degree of variability, which likely detracts from its usefulness.
I believe you can do this without decorators:
class Specifiable:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

class BlankStruct(Specifiable):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Do other stuff.
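A quick check that this behaves like the ideal use case at the top of the question:
instance = BlankStruct(spam=0, eggs=1)
print(instance.spam)  # 0
print(instance.eggs)  # 1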

Python: Can a subclass of float take extra arguments in its constructor?

In Python 3.4, I'd like to create a subclass of float -- something that can be used in math and boolean operations like a float, but has other custom functionality and can receive an argument at initialization that controls that functionality. (Specifically, I wanted to have a custom __str__ and a parameter that is used in that method.)
However, I can't seem to get a subclass of float to have a functional two-argument constructor. Why? Is this simply a limitation on extending built-in types?
Example:
class Foo(float):
    def __init__(self, value, extra):
        super().__init__(value)
        self.extra = extra
Now if I try Foo(1,2) I get:
TypeError: float() takes at most 1 argument (2 given)
Surprisingly, my new __init__'s arguments are enforced too, so if I do Foo(1) I get:
TypeError: __init__() missing 1 required positional argument: 'extra'
What's the deal here? I've done similar things with subtypes of list and was surprised it didn't work on float.
As float is immutable, you have to override __new__ as well. The following should do what you want:
class Foo(float):
    def __new__(self, value, extra):
        return float.__new__(self, value)
    def __init__(self, value, extra):
        float.__init__(value)
        self.extra = extra

foo = Foo(1, 2)
print(str(foo))        # 1.0
print(str(foo.extra))  # 2
See also Sub-classing float type in Python, fails to catch exception in __init__()
Both @cgogolin and @qvpham provide working answers. However, I reckon that float.__init__(value) within the __init__ method is irrelevant to the initialization of Foo: it does nothing to initialize the attributes of Foo, and it only creates confusion about whether that call is necessary when subclassing the float type.
Indeed, the solution can be further simplified as follows:
In [1]: class Foo(float):
   ...:     def __new__(cls, value, extra):
   ...:         return super().__new__(cls, value)
   ...:     def __init__(self, value, extra):
   ...:         self.extra = extra

In [2]: foo = Foo(1, 2)
   ...: print(str(foo))
1.0

In [3]: print(foo.extra)
2
cgogolin's solution is right. The same approach works for other immutable classes like int, str, and so on. But I would write:
class Foo(float):
    def __new__(cls, value, extra):
        return super().__new__(cls, value)
    def __init__(self, value, extra):
        float.__init__(value)
        self.extra = extra
While you can handle initialization in the __new__ method, because it is always called before __init__ (or even instead of it, if the object __new__ returns is not an instance of the class), it is best practice to decouple object initialization into __init__ and leave __new__ for object creation only.
For instance, that way you will be able to subclass Foo. (Furthermore, passing *args and **kwargs on to __new__ allows a subclass to have any number of positional or named arguments.)
class Foo(float):
    def __new__(cls, value, *args, **kwargs):
        return super().__new__(cls, value)
    def __init__(self, value, extra):
        self.extra = extra

class SubFoo(Foo):
    def __init__(self, value, extra, more):
        super().__init__(value, extra)
        self.more = more
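With creation and initialization decoupled this way, the subclass works as intended; for example (the values here are just for illustration):
sub_foo = SubFoo(1.5, "extra", "more")
print(sub_foo)        # 1.5
print(sub_foo.extra)  # extra
print(sub_foo.more)   # more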
However, if you handle initialization in __new__, you will inherit object's __init__, which takes no arguments beyond the instance itself, and you won't be able to subclass it in the usual way:
class Bar(float):
    def __new__(cls, value, extra):
        self = super().__new__(cls, value)
        self.extra = extra
        return self

class SubBar(Bar):
    def __init__(self, value, extra):
        super().__init__(value, extra)

>>> sub_bar = SubBar(1, 2)
TypeError: object.__init__() takes no parameters
You can do this without implementing __init__ at all:
class Foo(float):
    def __new__(cls, value, extra):
        instance = super().__new__(cls, value)
        instance.extra = extra
        return instance
In use:
>>> foo = Foo(1, 2)
>>> print(foo)
1.0
>>> print(foo.extra)
2

How do you automate setting self attributes in __init__ of a Python class?

I'd like to automate unpacking init variables once and for all. My thought process is to unpack all and use setattr with the name of the variable as the attribute name.
class BaseObject(object):
    def __init__(self, *args):
        for arg in args:
            setattr(self, argname, arg)

class Sub(BaseObject):
    def __init__(self, argone, argtwo, argthree, argfour):
        super(Sub, self).__init__(args*)
Then you should be able to do:
In [3]: mysubclass = Sub('hi', 'stack', 'overflow', 'peeps')
In [4]: mysubclass.argtwo
Out[4]: 'stack'
Based on the param for 'stack' having been named argtwo.
This way you'll automatically have access but you could still override like below:
class Sub(BaseObject):
    def __init__(self, argone, argtwo, argthree, argfour):
        super(Sub, self).__init__(arglist)
        clean_arg_three = process_arg_somehow(argthree)
        self.argthree = clean_arg_three
Clearly I'm stuck on how to pass the actual name of the param (argone, argtwo, etc) as a string to setattr, and also how to properly pass the args into the super init (the args of super(Sub, self).__init__(args*))
Thank you
Use kwargs instead of args:
class BaseObject(object):
    def __init__(self, **kwargs):
        for argname, arg in kwargs.items():
            setattr(self, argname, arg)

class Sub(BaseObject):
    def __init__(self, argone, argtwo, argthree, argfour):
        super(Sub, self).__init__(argone=argone, argtwo=argtwo,
                                  argthree=argthree, argfour=argfour)
Then you can do:
s = Sub('a', 'b', 'c', 'd')
BaseObject.__init__() needs to discover the names of the arguments to Sub.__init__() somehow. The explicit approach would be to pass a dictionary of arguments to BaseObject.__init__(), as Uri Shalit suggested. Alternatively, you can use the inspect module to do this magically, with no extra code in your subclass. That comes with the usual downsides of magic ("how'd that happen?").
import inspect

class BaseObject(object):
    def __init__(self):
        frame = inspect.stack()[1][0]
        args, _, _, values = inspect.getargvalues(frame)
        for key in args:
            setattr(self, key, values[key])

class Sub(BaseObject):
    def __init__(self, argone, argtwo, argthree, argfour):
        super(Sub, self).__init__()
This works fine as written, but if you don't define Sub.__init__(), this will grab the arguments from the wrong place (i.e., from the function where you call Sub() to create an object). You might be able to use inspect to doublecheck that the caller is an __init__() function of a subclass of BaseObject. Or you could just move this code to a separate method, e.g., set_attributes_from_my_arguments() and call that from your subclasses' __init__() methods when you want this behavior.
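A minimal sketch of that last suggestion, using the set_attributes_from_my_arguments() name from the paragraph above (skipping 'self' is my addition):
import inspect

class BaseObject(object):
    def set_attributes_from_my_arguments(self):
        # inspect the caller's frame, which is expected to be the
        # __init__() of a subclass
        frame = inspect.stack()[1][0]
        args, _, _, values = inspect.getargvalues(frame)
        for key in args:
            if key != 'self':
                setattr(self, key, values[key])

class Sub(BaseObject):
    def __init__(self, argone, argtwo):
        self.set_attributes_from_my_arguments()

s = Sub(1, 2)
print(s.argone, s.argtwo)  # 1 2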
import inspect

class BaseObject(object):
    def __init__(self, args):
        del args['self']
        self.__dict__.update(args)

class Sub(BaseObject):
    def __init__(self, argone, argtwo, argthree, argfour):
        args = inspect.getargvalues(inspect.currentframe()).locals
        super(Sub, self).__init__(args)

s = Sub(1, 2, 3, 4)
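which leaves the arguments on the instance:
print(s.argone, s.argtwo, s.argthree, s.argfour)  # 1 2 3 4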

setattr and getattr with methods

I have a boilerplate-heavy class that delegates some actions to a referenced class. It looks like this:
class MyClass():
    def __init__(self, someClass):
        self.refClass = someClass

    def action1(self):
        self.refClass.action1()

    def action2(self):
        self.refClass.action2()

    def action3(self):
        self.refClass.action3()
This is the refClass:
class RefClass():
    def __init__(self):
        self.myClass = MyClass(self)

    def action1(self):
        pass  # Stuff to execute action1

    def action2(self):
        pass  # Stuff to execute action2

    def action3(self):
        pass  # Stuff to execute action3
I'd like to use Python Metaprogramming to make this more elegant and readable, but I'm not sure how.
I've heard of setattr and getattr, and I think I could do something like
class MyClass():
    def __init__(self, someClass):
        self.refClass = someClass
        for action in ['action1', 'action2', 'action3']:
            def _delegate(self):
                getattr(self.refClass, action)()
And then I know I need to do this from somewhere, I guess:
MyClass.setattr(action, delegate)
I just can't totally grasp this concept. I understand the basics of not repeating code and generating the methods with a for loop, but I don't know how to call these methods from elsewhere. Heeeelp!
Python already includes support for generalized delegation to a contained class. Just change the definition of MyClass to:
class MyClass:
    def __init__(self, someClass):
        self.refClass = someClass  # Note: you call this someClass, but in your example it's actually some object, not some class

    def __getattr__(self, name):
        return getattr(self.refClass, name)
When defined, __getattr__ is called on the instance with the name of the accessed attribute any time an attribute is not found on the instance itself. You then delegate to the contained object by calling getattr to look up the attribute on the contained object and return it. This costs a little each time to do the dynamic lookup, so if you want to avoid it, you can lazily cache attributes when they're first requested by __getattr__, so subsequent access is direct:
def __getattr__(self, name):
    attr = getattr(self.refClass, name)
    setattr(self, name, attr)
    return attr
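As a quick illustration of the delegation (the Engine/Car names here are hypothetical stand-ins, not from the question):
class Engine:
    def start(self):
        print("started")

class Car:
    def __init__(self, engine):
        self.refClass = engine  # same attribute name as above
    def __getattr__(self, name):
        # only invoked when normal attribute lookup fails
        return getattr(self.refClass, name)

car = Car(Engine())
car.start()  # delegated to Engine.start, prints "started"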
Personally, for delegating things I usually do something like this:
def delegate(prop_name, meth_name):
    def proxy(self, *args, **kwargs):
        prop = getattr(self, prop_name)
        meth = getattr(prop, meth_name)
        return meth(*args, **kwargs)
    return proxy

class MyClass(object):
    def __init__(self, someClass):
        self.refClass = someClass

    action1 = delegate('refClass', 'action1')
    action2 = delegate('refClass', 'action2')
This will create all delegate methods you need :)
For some explanation: the delegate function here just creates a "proxy" function which acts as a class method (see the self argument?) and passes all arguments given to it on to the referenced object's method via *args and **kwargs (see *args and **kwargs? for more information about these arguments).
You can create these with a list too, but I prefer the first version because it's more explicit to me :)
class MyClass(object):
    delegated_methods = ['action1', 'action2']

    def __init__(self, someClass):
        self.refClass = someClass
        for meth_name in self.delegated_methods:
            # __get__ binds the proxy to this instance so it can be called
            # like a normal method; a plain function stored on the instance
            # would not receive self automatically
            setattr(self, meth_name, delegate('refClass', meth_name).__get__(self))
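Usage is then identical to the explicit class-level version; assuming RefClass from the question, with its action methods filled in to do something observable:
ref = RefClass()
my = MyClass(ref)
my.action1()  # forwarded to ref.action1()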

python: get constructor to return an existing object instead of a new one

I have a class that knows its existing instances. Sometimes I want the class constructor to return an existing object instead of creating a new one.
class X:
    def __new__(cls, arg):
        i = f(arg)
        if i:
            return X._registry[i]
        else:
            return object.__new__(cls)
    # more stuff here (such as __init__, _registry, etc.)
Of course, if the first branch is executed, I don't need __init__, but it's invoked anyway. What's a good way to tell __init__ to do nothing?
I can probably just add some attribute to keep track of whether __init__ has run yet, but perhaps there's a better way?
In languages that support private constructors (C#, Dart, Scala, etc), factory methods provide a robust solution to this problem.
In Python, however, class constructors are always accessible, and so a user of your class may easily forget the factory method and call the constructor directly, producing duplicate copies of objects that should be unique.
A fool-proof solution to this problem can be achieved using a metaclass. The example below assumes that the zeroth constructor argument can be used to uniquely identify each instance:
class Unique(type):
    def __call__(cls, *args, **kwargs):
        if args[0] not in cls._cache:
            self = cls.__new__(cls, *args, **kwargs)
            cls.__init__(self, *args, **kwargs)
            cls._cache[args[0]] = self
        return cls._cache[args[0]]

    def __init__(cls, name, bases, attributes):
        super().__init__(name, bases, attributes)
        cls._cache = {}
It can be used as follows:
class Country(metaclass=Unique):
    def __init__(self, name: str, population: float, nationalDish: str):
        self.name = name
        self.population = population
        self.nationalDish = nationalDish

placeA = Country("Netherlands", 16.8e6, "Stamppot")
placeB = Country("Yemen", 24.41e6, "Saltah")
placeC = Country("Netherlands", 11, "Children's tears")

print(placeA is placeB)     # -> False
print(placeA is placeC)     # -> True
print(placeC.nationalDish)  # -> Stamppot
In summary, this approach is useful if you want to produce a set of unique objects at runtime (possibly using data in which entries may be repeated).
Use a factory, i.e.
_x_singleton = None

def XFactory():
    global _x_singleton
    if _x_singleton is None:
        _x_singleton = X()
    return _x_singleton
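Repeated calls then hand back the one cached instance (assuming X is defined elsewhere with a no-argument constructor):
a = XFactory()
b = XFactory()
assert a is b  # both names refer to the same X instance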
or use a "create" classmethod in your class that behaves the way you want it to,
class X(object):
instance = None
def __init__(self):
# ...
#classmethod
def create(cls):
if cls.instance is None:
cls.instance = cls()
return cls.instance
You might even consider making __init__ raise an exception if some condition isn't met (i.e. if self.instance is not None).
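A minimal sketch of that last idea, guarding __init__ so that direct construction fails once the singleton exists (the exact guard and error message are my assumptions):
class X(object):
    instance = None

    def __init__(self):
        # refuse direct construction once the singleton exists
        if X.instance is not None:
            raise RuntimeError("use X.create() instead of X()")

    @classmethod
    def create(cls):
        if cls.instance is None:
            cls.instance = cls()
        return cls.instance

a = X.create()
b = X.create()
assert a is b
# X()  # would now raise RuntimeError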
