Promote instantiated class/object to a class in Python?

Is there a way in Python to store an instantiated object as a class 'template' (i.e., promote the object to a class) so I can create new objects of the same type with the same field values, without reusing the data that created the original object and without copy.deepcopy?
Like, for example I have the dictionary:
valid_date = {"date":"30 february"} # dict could have multiple items
and I have the class:
class AwesomeDate:
    def __init__(self, dates_dict):
        for key, val in dates_dict.items():
            setattr(self, key, val)
I create the instance of the class like:
totally_valid_date = AwesomeDate(valid_date)
print(totally_valid_date.date) # output: 30 february
and now I want to use it to create new instances of the AwesomeDate class using the totally_valid_date instance as a template, i.e. like:
how_make_it_work = totally_valid_date()
print(how_make_it_work.date) # should print: 30 february
Is there a way to do this? I need a generic solution, not a solution for this specific example.

I don't really see the benefit of having a class act both as a template for instances and as the instance itself, either conceptually or coding-wise. In my opinion, you're better off using two different classes: one for the template, one for the objects it is able to create.
You can think of AwesomeDate as a template class that stores the valid_date attributes upon initialization. Once called, the template returns an instance of a different class that has the expected attributes.
Here's a simple implementation (names have been changed to generalize the idea):
class Thing:
    pass

class Template:
    def __init__(self, template_attrs):
        self.template_attrs = template_attrs

    def __call__(self):
        instance = Thing()
        for key, val in self.template_attrs.items():
            setattr(instance, key, val)
        return instance
attrs = {'date': '30 february'}
template = Template(template_attrs=attrs)
# Gets instance of Thing
print(template()) # output: <__main__.Thing object at 0x7ffa656f8668>
# Gets another instance of Thing and accesses the date attribute
print(template().date) # output: 30 february
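One caveat worth noting: __call__ assigns the same value objects to every new instance, so a mutable value (a list or dict, say) would end up shared between instances. A minimal sketch of a variant that shallow-copies each field per instance (SafeTemplate is a made-up name):
import copy

class SafeTemplate(Template):
    def __call__(self):
        instance = Thing()
        for key, val in self.template_attrs.items():
            # shallow-copy each value so instances don't share mutable state
            setattr(instance, key, copy.copy(val))
        return instance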

Yes, there are ways to do it -
you could even tweak inheriting from type and meddle with __call__ to make all instances automatically become derived classes. But I don't think that would be very sane. Python's own enum.Enum does something along these lines, because it has some use for the enum values - but the price is that it became hard to understand beyond the basic usage, even for seasoned Pythonistas.
However, a custom __init_subclass__ method that injects some code to run prior to __init__ on the derived class, plus a method that returns a new class bound to the data the new classes should carry, can suffice:
import copy
from functools import wraps

def wrap_init(init):
    @wraps(init)
    def wrapper(self, *args, **kwargs):
        if not getattr(self, "_initialized", False):
            self.__dict__.update(self._template_data or {})
            self._initialized = True
        return init(self, *args, **kwargs)
    wrapper._template_wrapper = True
    return wrapper

class TemplateBase:
    _template_data = None

    def __init_subclass__(cls, *args, **kwargs):
        super().__init_subclass__(*args, **kwargs)
        if getattr(cls.__init__, "_template_wrapper", False):
            return
        init = cls.__init__
        cls.__init__ = wrap_init(init)

    def as_class(self):
        cls = self.__class__
        new_cls = type(cls.__name__ + "_templated", (cls,), {})
        new_cls._template_data = copy.copy(self.__dict__)
        return new_cls
And using it:
class AwesomeDate(TemplateBase):
    def __init__(self, dates_dict):
        for key, val in dates_dict.items():
            setattr(self, key, val)
On the REPL we have:
In [34]: x = AwesomeDate({"x":1, "y":2})
In [35]: Y = x.as_class()
In [36]: y = Y({})
In [37]: y.x
Out[37]: 1
Actually, __init_subclass__ itself could be suppressed, and decorating __init__ could be done in one shot in the as_class method. The code above takes some extra care so that mixin classes can still be used and everything keeps working.
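For completeness, a minimal sketch of that one-shot alternative, assuming templating is the only goal (no care taken for mixins here):
import copy

class TemplateBase:
    def as_class(self):
        cls = self.__class__
        data = copy.copy(self.__dict__)
        original_init = cls.__init__

        def __init__(inner_self, *args, **kwargs):
            # inject the template data, then run the original __init__
            inner_self.__dict__.update(data)
            original_init(inner_self, *args, **kwargs)

        return type(cls.__name__ + "_templated", (cls,), {"__init__": __init__})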

It seems like you are going for something along the lines of the prototype design pattern.
What is the prototype design pattern?
From Wikipedia: Prototype pattern
The prototype pattern is a creational design pattern in software development. It is used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to avoid subclasses of an object creator in the client application, like the factory method pattern does and to avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword) when it is prohibitively expensive for a given application.
From Refactoring.guru: Prototype
Prototype is a creational design pattern that lets you copy existing objects without making your code dependent on their classes. The Prototype pattern delegates the cloning process to the actual objects that are being cloned. The pattern declares a common interface for all objects that support cloning. This interface lets you clone an object without coupling your code to the class of that object. Usually, such an interface contains just a single clone method.
The implementation of the clone method is very similar in all classes. The method creates an object of the current class and carries over all of the field values of the old object into the new one. You can even copy private fields, because most programming languages let objects access private fields of other objects that belong to the same class.
An object that supports cloning is called a prototype. When your objects have dozens of fields and hundreds of possible configurations, cloning them might serve as an alternative to subclassing. Here's how it works: you create a set of objects, configured in various ways. When you need an object like the one you've configured, you just clone a prototype instead of constructing a new object from scratch.
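As a minimal generic sketch of the clone interface those quotes describe (Prototype and Circle are made-up names):
import copy

class Prototype:
    def clone(self):
        # create an object of the current class and carry over all field values
        return copy.deepcopy(self)

class Circle(Prototype):
    def __init__(self, radius, color):
        self.radius = radius
        self.color = color

configured = Circle(radius=5, color="red")
another = configured.clone()  # same field values, no reconfiguration from scratch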
Implementing this for your problem, along with your other ideas
From your explanation, it seems like you want to:
Provide a variable containing a dictionary, which will be passed to the __init__ of some class Foo
Instantiate class Foo and pass the variable containing the dictionary as an argument.
Implement __call__ on class Foo, allowing us to use function-call syntax on an object of class Foo.
The implementation of __call__ will COPY/CLONE the “template” object. We can then do whatever we want with this copied/cloned instance.
The Code (edited)
import copy

class Foo:
    def __init__(self, *, template_attrs):
        if not isinstance(template_attrs, dict):
            raise TypeError("You must pass a dict to instantiate this class.")
        self.template_attrs = template_attrs

    def __call__(self):
        # note: this is a shallow copy - the clone is a new Foo, but it
        # shares the same template_attrs dict with the original
        return copy.copy(self)

    def __repr__(self):
        return f"{self.template_attrs}"

    def __setitem__(self, key, value):
        self.template_attrs[key] = value

    def __getitem__(self, key):
        if key not in self.template_attrs:
            raise KeyError(f"Key {key} does not exist in '{self.template_attrs=}'.")
        return self.template_attrs[key]
err = Foo(template_attrs=1) # Output: TypeError: You must pass a dict to instantiate this class.
# remove the err line above so the code below can run
base = Foo(template_attrs={1: 2})
print(f"{base=}") # Output: base={1: 2}
base_copy = base()
base_copy["hello"] = "bye"
print(f"{base_copy=}") # Output: base_copy={1: 2, 'hello': 'bye'}
print(f"{base_copy[1]=}") # Output: base_copy[1]=2
print(f"{base_copy[10]=}") # Output: KeyError: "Key 10 does not exist in 'self.template_attrs={1: 2, 'hello': 'bye'}'."
I also added support for subscripting and item assignment through __getitem__ and __setitem__ respectively. I hope that this helped a bit with your problem! Feel free to comment on this if I missed what you were asking.
Reasons for edits (May 16th, 2022 at 8:49 PM CST | Approx. 9 hours after original answer)
Fix code based on suggestions by comment from user jsbueno
Handle, in __getitem__, if an instance of class Foo is subscripted with a key that doesn't exist in the dict.
Handle, in __init__, if the type of template_attrs isn't dict (did this based on the fact that you used a dictionary in the body of your question)

Related

How to create a class which can be initialized by its own instance

Task:
Implement a class that accepts at least one argument and can be initialized either by the original data or by its own instance.
Minimal example of usage:
arg = {} # whatever necessary for the real object
instance1 = NewClass(arg)
instance2 = NewClass(instance1)
assert instance2 is instance1 # or at least, ==
More complex example of usage:
from typing import Mapping, Union

class NewClass:
    """
    Incomplete.
    Should somehow act as described in the task.
    """
    def __init__(self, data: Mapping):
        self.data = data

    def cool_method(self):
        assert isinstance(self.data, Mapping)
        # do smth with self.data
        return ...

    ...
class AnotherClass:
    """
    Accepts both mappings and NewClass instances,
    but needs NewClass internally
    """
    def __init__(self, obj: Union[Mapping, NewClass]):
        self.cool = NewClass(obj).cool_method()
        ...
One just has to make use of the __new__ method on the class, instead of __init__, to be able to change what is instantiated.
In this case, all you need is to write your NewClass like this:
from typing import Union, Mapping, Self

class NewClass:
    """
    acts like described in the task
    """
    # typing.Self is available in Python 3.11.
    # For previous versions, just put the class name quoted
    # in a string: `"NewClass"` instead of `Self`
    def __new__(cls, data: Union[Mapping, Self]):
        if isinstance(data, NewClass):
            return data
        self = super().__new__(cls)
        self.data = data
        return self

    def cool_method(self):
        assert isinstance(self.data, Mapping)
        # do smth with self.data
        return ...
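A quick check against the task's minimal example:
arg = {}  # whatever is necessary for the real object
instance1 = NewClass(arg)
instance2 = NewClass(instance1)
assert instance2 is instance1  # the instance is returned unchanged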
Avoiding a metaclass is interesting because it avoids metaclass conflicts in larger projects, and it is an abstraction level most projects simply do not need. Moreover, static type checkers such as Mypy can't even figure out behavior changes coded into metaclasses.
On the other hand, __new__ is a common special method, a sibling to __init__ that is readily available; it is just used less often because Python also provides __init__, which suffices when the default behavior of __new__ (always creating a new instance) is the desired one.
For some reason I do not know, using a metaclass to create a "singleton" got wildly popular in tutorials and answers. It is a design pattern much less important and less used in Python than in languages which do not allow stand-alone functions. Metaclasses are not needed for singletons either, by the way: one can just create a top-level instance of whatever class should have a single instance, and use that instance from that point on instead of creating new ones. It is other languages' restrictions on top-level, importable instances that artificially imported that need into Python.
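A minimal sketch of that top-level-instance approach (Config and debug are made-up names):
# config.py
class Config:
    def __init__(self):
        self.debug = False

# the module-level instance is the "singleton": every importer shares it
config = Config()

# elsewhere: `from config import config` - and never instantiate Config again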
Metaclass solution (works as of Python 3.8):
class SelfWrapperMeta(type):
    """
    Metaclass allowing a user class to return a previously created
    instance, if the user class init receives such an instance as the
    first positional argument. Other arguments are just ignored in that
    self-wrapping case; otherwise, the user class init is called normally.
    """
    def __call__(cls, arg, /, *args, **kwargs):
        if isinstance(arg, cls):
            return arg
        return super().__call__(arg, *args, **kwargs)
Example of usage:
class A(metaclass=SelfWrapperMeta):
    def __init__(self, data):
        self.data = data

example = {}
a = A(example)
b = A(a)
c = A(example)
assert a is b
assert c is not a

How can I specialise instances of objects when I don't have access to the instantiation code?

Let's assume I am using a library which gives me instances of classes defined in that library when calling its functions:
>>> from library import find_objects
>>> result = find_objects("name = any")
[SomeObject(name="foo"), SomeObject(name="bar")]
Let's further assume that I want to attach new attributes to these instances. For example a classifier to avoid running this code every time I want to classify the instance:
>>> from library import find_objects
>>> result = find_objects("name = any")
>>> for row in result:
...     row.item_class = my_classifier(row)
Note that this is contrived but illustrates the problem: I now have instances of the class SomeObject but the attribute item_class is not defined in that class and trips up the type-checker.
So when I now write:
print(result[0].item_class)
I get a typing error. It also trips up auto-completion in editors as the editor does not know that this attribute exists.
Not to mention that implementing it this way is quite ugly and hacky.
One thing I could do is create a subclass of SomeObject:
class ExtendedObject(SomeObject):
    item_class = None

    def classify(self):
        cls = do_something_with(self)
        self.item_class = cls
This now makes everything explicit, I get a chance to properly document the new attributes and give it proper type-hints. Everything is clean. However, as mentioned before, the actual instances are created inside library and I don't have control over the instantiation.
Side note: I ran into this issue in flask for the Response class. I noticed that flask actually offers a way to customise the instantiation using Flask.response_class. But I am still interested how this could be achieved in libraries that don't offer this injection seam.
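For reference, a hedged sketch of that Flask seam (TaggedResponse and item_class are made-up names; app.response_class is Flask's real injection point):
from flask import Flask, Response

class TaggedResponse(Response):
    # the extra attribute is declared here, so it can be documented
    # and type-checked
    item_class = None

app = Flask(__name__)
app.response_class = TaggedResponse  # Flask now instantiates our subclass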
One thing I could do is write a wrapper that does something like this:
class WrappedObject(SomeObject):
    item_class = None
    wrapped = None

    @staticmethod
    def from_original(wrapped):
        obj = WrappedObject()
        obj.wrapped = wrapped
        obj.item_class = do_something_with(wrapped)
        return obj

    def __getattribute__(self, key):
        # go through object.__getattribute__ for `wrapped` itself,
        # otherwise this forwarding would recurse forever
        return getattr(object.__getattribute__(self, "wrapped"), key)
But this seems rather hacky and will not work in other programming languages.
Or try to copy the data:
from copy import deepcopy

class CopiedObject(SomeObject):
    item_class = None

    @staticmethod
    def from_original(wrapped):
        # build a fresh CopiedObject and deep-copy every field over
        obj = CopiedObject()
        for key, value in vars(wrapped).items():
            setattr(obj, key, deepcopy(value))
        obj.item_class = do_something_with(wrapped)
        return obj
but this feels equally hacky, and is risky when the objects use properties and/or descriptors.
Are there any known "clean" patterns for something like this?
I would go with a variant of your WrappedObject approach, with the following adjustments:
I would not extend SomeObject: this is a case where composition feels more appropriate than inheritance
With that in mind, from_original is unnecessary: you can have a proper __init__ method
item_class should be an instance variable and not a class variable. It should be initialized in your WrappedObject class constructor
Think twice before implementing __getattribute__ and forwarding everything to the wrapped object. If you need only a few methods and attributes of the original SomeObject class, it might be better to implement them explicitly as methods and properties
class WrappedObject:
    def __init__(self, wrapped):
        self.wrapped = wrapped
        self.item_class = do_something_with(wrapped)

    def a_method(self):
        return self.wrapped.a_method()

    @property
    def a_property(self):
        return self.wrapped.a_property
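Hypothetical usage, assuming find_objects and do_something_with from the question are available:
result = [WrappedObject(obj) for obj in find_objects("name = any")]
print(result[0].item_class)  # the attribute is declared on WrappedObject,
                             # so type checkers and editors know about it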

Is this sound software engineering practice for class construction?

Is this a plausible and sound way to write a class where a syntactic-sugar @staticmethod is used as the interface for the outside? Thanks.
### script1.py ###
from script2 import SampleClass

output = SampleClass.method1(input_var)

### script2.py ###
class SampleClass(object):
    def __init__(self):
        self.var1 = 'var1'
        self.var2 = 'var2'

    @staticmethod
    def method1(input_var):
        # Syntactic sugar method that the outside uses
        sample_class = SampleClass()
        result = sample_class._method2(input_var)
        return result

    def _method2(self, input_var):
        # Main method: executes the various steps.
        self.var4 = self._method3(input_var)
        return self._method4(self.var4)

    def _method3(self, input_var):
        pass

    def _method4(self, var4):
        pass
Answering both your question and your comment: yes, it is possible to write such code, but I see no point in doing it:
class A:
    def __new__(cls, value):
        return cls.meth1(value)

    def meth1(value):
        # note: no `self` - accessed through the class, this is a plain function
        return value + 1

result = A(100)
print(result)
# output: 101
You can't store a reference to a class A instance, because you get your method's result instead of an A instance. And because of this, an existing __init__ will not be called.
So if the instance just calculates something and gets discarded right away, what you want is to write a simple function, not a class. You are not storing state anywhere.
And if you look at it:
result = some_func(value)
looks exactly to what people expect when reading it, a function call.
So no, it is not a good practice unless you come up with a good use case for it (I can't remember one right now)
Also relevant for this question is the documentation here to understand __new__ and __init__ behaviour.
Regarding your other comment below my answer:
Defining __init__ in a class to set the initial state (attribute values) of the (already) created instance happens all the time. But __new__ has the different goal of customizing the object creation itself. The instance object does not exist yet when __new__ is run (it is a constructor function). __new__ is rarely needed in Python, unless you need things like a singleton, say a class A that always returns the very same object instance (of A) when called with A(). Normal user-defined classes usually return a new object on instantiation. You can check this with the id() builtin function.
Another use case is when you create your own version (by subclassing) of an immutable type. Because it's immutable, the value is already set and there is no way of changing it inside __init__ or later. Hence the need to act before that, adding code inside __new__, as sketched below. Using __new__ without returning an object of the same class type (the uncommon case) has the additional problem of not running __init__.
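A minimal sketch of that immutable-subclass case (UpperStr is a made-up example):
class UpperStr(str):
    # str is immutable: the value must be fixed in __new__, because by
    # the time __init__ runs it is too late to change it
    def __new__(cls, value):
        return super().__new__(cls, value.upper())

s = UpperStr("hello")
print(s)  # HELLO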
If you are just grouping lots of methods inside a class but there is still no state to store/manage in each instance (you can notice this by the absence of self in the method bodies), consider not using a class at all and organizing these methods, now turned into self-less functions, in a module or package for import. It looks like you are grouping them just to organize related code.
If you stick to classes because there is state involved, consider breaking the class into smaller classes with no more than five to seven methods each. Think also of giving them some more structure by grouping some of the small classes in various modules/submodules and using subclasses, because a long flat list of small classes (or functions, for that matter) can be mentally difficult to follow.
This has nothing to do with __new__ usage.
In summary: use call syntax for a function call that returns a result (or None), or for object instantiation by calling the class name; in the latter case the usual contract is to return an object of the intended type (the class called). Returning the result of a method instead usually means returning a different type, and that can look unexpected to the class user. There is a closely related use case where some coders return self from their methods to allow for train-like syntax:
my_font = SomeFont().italic().bold()
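A sketch of that return-self chaining (SomeFont and its style methods are made-up names):
class SomeFont:
    def __init__(self):
        self.styles = set()

    def italic(self):
        self.styles.add("italic")
        return self  # returning self is what enables the chaining

    def bold(self):
        self.styles.add("bold")
        return self

my_font = SomeFont().italic().bold()
print(my_font.styles)  # {'italic', 'bold'} (set order may vary)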
Finally if you don't like result = A().method(value), consider an alias:
func = A().method
...
result = func(value)
Note how you are left with no reference to the A() instance in your code.
If you need the reference split further the assignment:
a = A()
func = a.method
...
result = func(value)
If the reference to A() is not needed, then you probably don't need the instance either, and the class is just grouping the methods. You can just write
func = A.method
result = func(value)
where self-less methods should be decorated with @staticmethod because there is no instance involved. Note also how such static methods could be turned into simple functions outside classes.
Edit:
I have set up an example similar to what you are trying to accomplish. It is also difficult to judge whether having methods inject results into the next method is the best choice for a multistep procedure; because the methods share some state, they are coupled to each other and can also inject errors into each other more easily. I assume you want to share some data between them that way (and that's why you are setting them up in a class):
So this an example class where a public method builds the result by calling a chain of internal methods. All methods depend on object state, self.offset in this case, despite getting an input value for calculations.
Because of this it makes sense that every method uses self to access the state. It also makes sense that you are able to instantiate different objects holding different configurations, so I see no use here for @staticmethod or @classmethod.
Initial instance configuration is done in __init__ as usual.
# file: multistepinc.py

class MultiStepInc:
    def __init__(self, offset):
        self.offset = offset

    def result(self, value):
        return self._step1(value)

    def _step1(self, x):
        x = self._step2(x)
        return self.offset + 1 + x

    def _step2(self, x):
        x = self._step3(x)
        return self.offset + 2 + x

    def _step3(self, x):
        return self.offset + 3 + x

def get_multi_step_inc(offset):
    return MultiStepInc(offset).result
--------
# file: multistepinc_example.py
from multistepinc import get_multi_step_inc

# Get the result method of a configured MultiStepInc instance
# with offset=10. Much like an object factory, but you mentioned
# preferring to have the result method of the instance
# instead of the instance itself.
inc10 = get_multi_step_inc(10)

# invoke the inc10 method
result = inc10(1)
print(result)

# create another instance with offset=2
inc2 = get_multi_step_inc(2)
result = inc2(1)
print(result)

# if you need to manipulate the object instance,
# you have to (at file top):
from multistepinc import MultiStepInc
# and then
inc_obj = MultiStepInc(5)
# ... do something with your obj, then
result = inc_obj.result(1)
print(result)
Outputs:
37
13
22

How to Inherit multiple classes in python dynamically [duplicate]

This article has a snippet showing usage of __bases__ to dynamically change the inheritance hierarchy of some Python code, by adding a class to the collection of classes from which an existing class inherits. OK, that's hard to read; code is probably clearer:
class Friendly:
    def hello(self):
        print 'Hello'

class Person: pass

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
That is, Person doesn't inherit from Friendly at the source level; rather, this inheritance relation is added dynamically at runtime by modification of the __bases__ attribute of the Person class. However, if you change Friendly and Person to be new-style classes (by inheriting from object), you get the following error:
TypeError: __bases__ assignment: 'Friendly' deallocator differs from 'object'
A bit of Googling on this seems to indicate some incompatibilities between new-style and old-style classes with regard to changing the inheritance hierarchy at runtime. Specifically: "New-style class objects don't support assignment to their bases attribute".
My question: is it possible to make the above Friendly/Person example work using new-style classes in Python 2.7+, possibly by use of the __mro__ attribute?
Disclaimer: I fully realise that this is obscure code. I fully realize that in real production code tricks like this tend to border on unreadable, this is purely a thought experiment, and for funzies to learn something about how Python deals with issues related to multiple inheritance.
Ok, again, this is not something you should normally do, this is for informational purposes only.
Where Python looks for a method on an instance object is determined by the __mro__ attribute of the class which defines that object (the Method Resolution Order attribute). Thus, if we could modify the __mro__ of Person, we'd get the desired behaviour. Something like:
setattr(Person, '__mro__', (Person, Friendly, object))
The problem is that __mro__ is a readonly attribute, and thus setattr won't work. Maybe if you're a Python guru there's a way around that, but clearly I fall short of guru status as I cannot think of one.
A possible workaround is to simply redefine the class:
def modify_Person_to_be_friendly():
    # so that we're modifying the global identifier 'Person'
    global Person

    # now just redefine the class using type(), specifying that the new
    # class should inherit from Friendly and have all attributes from
    # our old Person class
    Person = type('Person', (Friendly,), dict(Person.__dict__))

def main():
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()  # works!
What this doesn't do is modify any previously created Person instances to have the hello() method. For example (just modifying main()):
def main():
    oldperson = Person()
    modify_Person_to_be_friendly()
    p = Person()
    p.hello()          # works! But:
    oldperson.hello()  # does not
If the details of the type call aren't clear, then read e-satis' excellent answer on 'What is a metaclass in Python?'.
I've been struggling with this too, and was intrigued by your solution, but Python 3 takes it away from us:
AttributeError: attribute '__dict__' of 'type' objects is not writable
I actually have a legitimate need for a decorator that replaces the (single) superclass of the decorated class. It would require too lengthy a description to include here (I tried, but couldn't get it to a reasonable length and limited complexity; it came up in the context of many Python applications using a Python-based enterprise server, where different applications needed slightly different variations of some of the code.)
The discussion on this page and others like it provided hints that the problem of assigning to __bases__ only occurs for classes with no superclass defined (i.e., whose only superclass is object). I was able to solve this problem (for both Python 2.7 and 3.2) by defining the classes whose superclass I needed to replace as being subclasses of a trivial class:
## T is used so that the other classes are not direct subclasses of object,
## since classes whose base is object don't allow assignment to their
## __bases__ attribute.
class T: pass

class A(T):
    def __init__(self):
        print('Creating instance of {}'.format(self.__class__.__name__))

## ordinary inheritance
class B(A): pass

## dynamically specified inheritance
class C(T): pass

A()                 # -> Creating instance of A
B()                 # -> Creating instance of B
C.__bases__ = (A,)
C()                 # -> Creating instance of C

## attempt at dynamically specified inheritance starting with a direct
## subclass of object doesn't work
class D: pass
D.__bases__ = (A,)
D()
## Result is:
## TypeError: __bases__ assignment: 'A' deallocator differs from 'object'
I cannot vouch for the consequences, but this code does what you want on py2.7.2.
class Friendly(object):
    def hello(self):
        print 'Hello'

class Person(object): pass

# we can't change the original classes, so we replace them
class newFriendly: pass
newFriendly.__dict__ = dict(Friendly.__dict__)
Friendly = newFriendly

class newPerson: pass
newPerson.__dict__ = dict(Person.__dict__)
Person = newPerson

p = Person()
Person.__bases__ = (Friendly,)
p.hello()  # prints "Hello"
We know that this is possible. Cool. But we'll never use it!
Right off the bat, all the caveats of messing with the class hierarchy dynamically are in effect.
But if it has to be done, then, apparently, there is a hack that gets around the "deallocator differs from 'object'" issue when modifying the __bases__ attribute of new-style classes.
You can define a buffer class
class Object(object): pass
which sits between your classes and the built-in object (it is an ordinary class created by the built-in metaclass type).
That's it: now new-style classes deriving from Object can modify __bases__ without any problem.
In my tests this actually worked very well: all existing instances (created before changing the inheritance) of the class and of its derived classes felt the effect of the change, including their mro getting updated.
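A minimal sketch of that buffer-class trick, reusing the Friendly/Person example from the question:
class Object(object): pass  # buffer class between our classes and `object`

class Friendly(Object):
    def hello(self):
        print('Hello')

class Person(Object): pass

p = Person()
Person.__bases__ = (Friendly,)  # no "deallocator differs" error this time
p.hello()  # prints "Hello"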
I needed a solution for this which:
Works with both Python 2 (>= 2.7) and Python 3 (>= 3.2).
Lets the class bases be changed after dynamically importing a dependency.
Lets the class bases be changed from unit test code.
Works with types that have a custom metaclass.
Still allows unittest.mock.patch to function as expected.
Here's what I came up with:
def ensure_class_bases_begin_with(namespace, class_name, base_class):
    """ Ensure the named class's bases start with the base class.

        :param namespace: The namespace containing the class name.
        :param class_name: The name of the class to alter.
        :param base_class: The type to be the first base class for the
            newly created type.
        :return: ``None``.

        Call this function after ensuring `base_class` is
        available, before using the class named by `class_name`.

        """
    existing_class = namespace[class_name]
    assert isinstance(existing_class, type)

    bases = list(existing_class.__bases__)
    if base_class is bases[0]:
        # Already bound to a type with the right bases.
        return
    bases.insert(0, base_class)

    new_class_namespace = existing_class.__dict__.copy()
    # Type creation will assign the correct ‘__dict__’ attribute.
    del new_class_namespace['__dict__']

    metaclass = existing_class.__metaclass__
    new_class = metaclass(class_name, tuple(bases), new_class_namespace)

    namespace[class_name] = new_class
Used like this within the application:
# foo.py

# Type `Bar` is not available at first, so can't inherit from it yet.
class Foo(object):
    __metaclass__ = type

    def __init__(self):
        self.frob = "spam"

    def __unicode__(self): return "Foo"

# … later …

import bar

ensure_class_bases_begin_with(
        namespace=globals(),
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)
Use like this from within unit test code:
# test_foo.py

""" Unit test for `foo` module. """

import unittest
import mock

import foo
import bar

ensure_class_bases_begin_with(
        namespace=foo.__dict__,
        class_name=str('Foo'),  # `str` type differs on Python 2 vs. 3.
        base_class=bar.Bar)

class Foo_TestCase(unittest.TestCase):
    """ Test cases for `Foo` class. """

    def setUp(self):
        patcher_unicode = mock.patch.object(
                foo.Foo, '__unicode__')
        patcher_unicode.start()
        self.addCleanup(patcher_unicode.stop)

        self.test_instance = foo.Foo()

        patcher_frob = mock.patch.object(
                self.test_instance, 'frob')
        patcher_frob.start()
        self.addCleanup(patcher_frob.stop)

    def test_instantiate(self):
        """ Should create an instance of `Foo`. """
        instance = foo.Foo()
The above answers are good if you need to change an existing class at runtime. However, if you are just looking to create a new class that inherits from some other class, there is a much cleaner solution. I got this idea from https://stackoverflow.com/a/21060094/3533440, but I think the example below better illustrates a legitimate use case.
def make_default(Map, default_default=None):
    """Returns a class which behaves identically to the given
    Map class, except it gives a default value for unknown keys."""
    class DefaultMap(Map):
        def __init__(self, default=default_default, **kwargs):
            self._default = default
            super().__init__(**kwargs)

        def __missing__(self, key):
            return self._default

    return DefaultMap
DefaultDict = make_default(dict, default_default='wug')

d = DefaultDict(a=1, b=2)
assert d['a'] == 1  # `==` rather than `is`: identity checks against literals are unreliable
assert d['b'] == 2
assert d['c'] == 'wug'
Correct me if I'm wrong, but this strategy seems very readable to me, and I would use it in production code. This is very similar to functors in OCaml.
This method isn't technically inheriting during runtime, since __mro__ can't be changed. But what I'm doing here is using __getattr__ to be able to access any attributes or methods from a certain class. (Read the comments in the order of the numbers placed before them; they make more sense that way.)
class Sub:
    def __init__(self, f, cls):
        self.f = f
        self.cls = cls

    # 6) this method will pass the self parameter
    # (which is the original class object we passed)
    # and then it will fill in the rest of the arguments
    # using *args and **kwargs
    def __call__(self, *args, **kwargs):
        # 7) the multiple try/except statements
        # are for making sure that if an attribute was
        # accessed instead of a function, the __call__
        # method will just return the attribute
        try:
            return self.f(self.cls, *args, **kwargs)
        except TypeError:
            try:
                return self.f(*args, **kwargs)
            except TypeError:
                return self.f

# 1) our base class
class S:
    def __init__(self, func):
        self.cls = func

    def __getattr__(self, item):
        # 5) we are wrapping the attribute we get in the Sub class
        # so we can implement the __call__ method there
        # to be able to pass the parameters in the correct order
        return Sub(getattr(self.cls, item), self.cls)

# 2) class we want to inherit from
class L:
    def run(self, s):
        print("run" + s)

# 3) we create an instance of our base class
# and then pass an instance (or just the class object)
# as a parameter to this instance
s = S(L)  # 4) in this case, I'm using the class object

s.run("1")
So this sort of substitution and redirection will simulate the inheritance of the class we wanted to inherit from. And it even works with attributes or methods that don't take any parameters.

What is the dict class used for?

Can someone explain what the dict class is used for? This snippet is from Dive Into Python
class FileInfo(dict):
    "store file metadata"
    def __init__(self, filename=None):
        self["name"] = filename
I understand the assignment of key=value pairs with self['name'] = filename but what does inheriting the dict class have to do with this? Please help me understand.
If you're not familiar with the inheritance concept of object-oriented programming, have a look at least at this wiki article (though that's only an introduction, and maybe not the best one).
In python we use this syntax to define class A as subclass of class B:
class A(B):
    pass  # empty class
In your example, as the FileInfo class inherits from the standard dict type, you can use instances of that class as dictionaries (they have all the methods a regular dict object has). Among other things, that allows you to assign values by key like this (dict provides the method handling this operation):
self['name'] = filename
Is that the explanation you wanted, or is something else unclear?
It's for creating your own customized dictionary type.
You can override the __init__, __getitem__ and __setitem__ methods for your own special purposes to extend the dictionary's usage, as sketched below.
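A minimal sketch of such an override (LoggingDict is a made-up name):
class LoggingDict(dict):
    def __setitem__(self, key, value):
        # log every write, then defer to the normal dict behavior
        print("setting {!r} = {!r}".format(key, value))
        super().__setitem__(key, value)

d = LoggingDict()
d["name"] = "kairo.mp3"  # prints: setting 'name' = 'kairo.mp3'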
Read the next section in the Dive into Python text: we use such inheritance to be able to work with file information just the way we do using a normal dictionary.
# From the example on the next section
>>> f = fileinfo.FileInfo("/music/_singles/kairo.mp3")
>>> f["name"]
'/music/_singles/kairo.mp3'
The fileinfo class is designed in a way that it receives a file name in its constructor, then lets the user get file information just the way you get the values from an ordinary dictionary.
Another use of such a class is to create dictionaries that control their own data. For example, you want a dictionary that does a special thing when values are assigned to, or read from, its 'sensor' key. You could define your special __setitem__ method, sensitive to the key name:
def __setitem__(self, key, item):
    # note: `self.data` presumes a UserDict-style class;
    # a plain dict subclass would call super().__setitem__(key, item)
    self.data[key] = item
    if key == "sensor":
        print("Sensor activated!")
Or, for example, you want to return a special value each time the user reads the 'temperature' key. For this you override the __getitem__ method:
def __getitem__(self, key):
    if key == "temperature":
        return CurrentWeatherTemperature()
    else:
        return self.data[key]
When a class in Python inherits from another class, any of the methods defined on the inherited class are, by nature, also defined on the newly created class.
So when FileInfo inherits from dict, all of the functionality of the dict class becomes available to FileInfo, in addition to anything that FileInfo may declare or, more importantly, override by re-defining the method or parameter.
Since the dict object in Python allows for key/value name pairs, this enables FileInfo to have access to that same mechanism.
