werkzeug's LocalProxy.__local, where is it initialized?

I am curious about how LocalProxy from the werkzeug package works. Specifically, where is the __local field initialized?
@implements_bool
class LocalProxy(object):
    __slots__ = ("__local", "__dict__", "__name__", "__wrapped__")

    def __init__(self, local, name=None):
        object.__setattr__(self, "_LocalProxy__local", local)
        object.__setattr__(self, "__name__", name)
        if callable(local) and not hasattr(local, "__release_local__"):
            object.__setattr__(self, "__wrapped__", local)

    def _get_current_object(self):
        if not hasattr(self.__local, "__release_local__"):
            return self.__local()
        try:
            return getattr(self.__local, self.__name__)
        except AttributeError:
            raise RuntimeError("no object bound to %s" % self.__name__)
    ...
There are no other places in the LocalProxy class definition that reference self.__local, and it seems to me that self.__local is not initialized anywhere. Is it somehow magically aliased to self._LocalProxy__local?

What you call aliasing is called name mangling in Python.
Given this example:
class Customer:
    def __init__(self, name):
        self.__name = name

__name is then available on instances as _Customer__name: Python prepends an underscore and the class name, which is exactly the _LocalProxy__local spelling used in LocalProxy.__init__ above.
This is used fairly rarely; the intention is to make access from outside the class (and accidental overriding in subclasses) a bit harder.
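For illustration, here is what that looks like from outside the class, reusing the Customer example above:

c = Customer("Alice")
print(c._Customer__name)   # "Alice": the mangled spelling is the real attribute
print(c.__name)            # AttributeError: mangling only happens inside a class
                           # body, so this looks up a literal "__name" attribute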
The other interesting part of your code example is __slots__.
It declares the complete set of attribute names up front, in an advanced way of defining attributes. It reduces per-instance memory (no __dict__ is created unless it is listed) and prevents new attributes from being defined dynamically.
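For the __slots__ part, a minimal sketch with a hypothetical Slotted class (not from the original code; note that slot names starting with two underscores are mangled as well):

class Slotted(object):
    __slots__ = ("__name",)       # becomes the _Slotted__name slot

    def __init__(self, name):
        self.__name = name        # writes to the slot descriptor

s = Slotted("Alice")
print(s._Slotted__name)           # "Alice"
s.nickname = "Al"                 # AttributeError: no __dict__, and no
                                  # 'nickname' slot was declared

LocalProxy itself lists "__dict__" in its __slots__, which gives instances a regular attribute dict back, so the proxy stays open to dynamic attributes.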

Related

What is the correct way to call super in dynamically added methods?

I defined a metaclass which adds a method named "test" to the created classes:
class FooMeta(type):
    def __new__(mcls, name, bases, attrs):
        def test(self):
            return super().test()
        attrs["test"] = test
        cls = type.__new__(mcls, name, bases, attrs)
        return cls
Then I create two classes using this metaclass:

class A(metaclass=FooMeta):
    pass

class B(A):
    pass
When I run
a = A()
a.test()
a TypeError is raised at super().test():
super(type, obj): obj must be an instance or subtype of type
Which means super() cannot infer the containing class correctly. If I change the super call to
def __new__(mcls, name, bases, attrs):
    def test(self):
        return super(cls, self).test()
    attrs["test"] = test
    cls = type.__new__(mcls, name, bases, attrs)
    return cls
then the raised error becomes:
AttributeError: 'super' object has no attribute 'test'
which is expected, as the parent of A does not implement a test method.
So my question is: what is the correct way to call super() in a dynamically added method? Should I always write super(cls, self) in this case? If so, that is rather ugly (for Python 3)!
Parameterless super() is very special in Python because it triggers behavior at compile time: Python creates an invisible __class__ cell variable that references the "physical" class statement body where the super() call is embedded (the same happens if one uses the __class__ variable directly inside a method).
In this case, the "physical" class where super() is called is the metaclass FooMeta itself, not the class it is creating.
The workaround is to use the form of super that takes two positional arguments: the class in which to search for the immediate superclass, and the instance itself.
In Python 2, and on other occasions when one prefers the parameterized form of super, it is normal to use the class name itself as the first parameter: at runtime, that name is available as a global variable in the current module. That is, if class A were statically coded in the source file with a def test(...) method, you would use super(A, self).test(...) inside its body.
Here, however, the class name is not available as a variable in the module defining the metaclass, yet you still need to pass a reference to the class as the first argument to super. Since the injected test method receives self as a reference to the instance, its class is given by either self.__class__ or type(self).
TL;DR: just change the super call in your dynamic method to read:
class FooMeta(type):
    def __new__(mcls, name, bases, attrs):
        def test(self):
            return super(type(self), self).test()
        attrs["test"] = test
        cls = type.__new__(mcls, name, bases, attrs)
        return cls
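One caveat with super(type(self), self): every class created through FooMeta gets its own injected test, so when the method is reached through a subclass instance (b = B(); b.test() in the question's setup), type(self) names the subclass at every level of the MRO and the call recurses forever. Capturing the freshly created class in a closure, as the question's second attempt did, avoids this; a minimal sketch:

class FooMeta(type):
    def __new__(mcls, name, bases, attrs):
        def test(self):
            # cls is read from the enclosing scope at call time, after the
            # assignment below, so each injected method starts its MRO
            # search at its own defining class
            return super(cls, self).test()
        attrs["test"] = test
        cls = type.__new__(mcls, name, bases, attrs)
        return cls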

__call__ from metaclass shadows signature of __init__

In the code below, I would like the suggested argument name to be init_argumentA when I type instance_of_A = A(, not *meta_args, **meta_kwargs. Unfortunately, the arguments of the metaclass's __call__ method are shown instead.
class Meta(type):
    def __call__(cls, *meta_args, **meta_kwargs):
        # Something here
        return super().__call__(*meta_args, **meta_kwargs)

class A(metaclass=Meta):
    def __init__(self, init_argumentA):
        # something here
        pass

class B(metaclass=Meta):
    def __init__(self, init_argumentB):
        # something here
        pass
I have searched for a solution and found the questions How to dynamically change signatures of method in subclass? and Signature-changing decorator: properly documenting additional argument, but neither seems to be quite what I want. The first uses inspect to change the number of arguments given to a function, but I can't get it to work for my case, and I think there has to be a more obvious solution.
The second isn't exactly what I want either, though something along those lines might be a good alternative.
Edit: I am working in Spyder. I want this because I have thousands of classes of the Meta type, each with different arguments, which is impossible to remember, so the idea is that users are reminded of the correct arguments as they type.
Using the code you provided, you can change the Meta class
class Meta(type):
    def __call__(cls, *meta_args, **meta_kwargs):
        # Something here
        return super().__call__(*meta_args, **meta_kwargs)

class A(metaclass=Meta):
    def __init__(self, x):
        pass
to
import inspect

class Meta(type):
    def __call__(cls, *meta_args, **meta_kwargs):
        # Something here
        # Restore the signature of __init__
        sig = inspect.signature(cls.__init__)
        parameters = tuple(sig.parameters.values())
        cls.__signature__ = sig.replace(parameters=parameters[1:])
        return super().__call__(*meta_args, **meta_kwargs)
Now IPython or some IDE will show you the correct signature.
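One wrinkle with this approach, which the next answer addresses: __signature__ is only attached when __call__ runs, so the corrected signature shows up only after the class has been instantiated at least once. A quick check, reusing the Meta and A definitions above:

import inspect

print(inspect.signature(A))   # (*meta_args, **meta_kwargs): nothing fixed yet
A(1)                          # the first instantiation attaches __signature__
print(inspect.signature(A))   # (x)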
I found that the answer of @johnbaltis was 99% there but not quite what was needed to ensure the signatures were in place.
If we use __init__ rather than __call__ as below we get the desired behaviour
import inspect

class Meta(type):
    def __init__(cls, clsname, bases, attrs):
        # Restore the signature
        sig = inspect.signature(cls.__init__)
        parameters = tuple(sig.parameters.values())
        cls.__signature__ = sig.replace(parameters=parameters[1:])
        return super().__init__(clsname, bases, attrs)

    def __call__(cls, *args, **kwargs):
        # create the instance first, then report and return it
        instance = super().__call__(*args, **kwargs)
        print(f'Instantiated: {cls.__name__}')
        return instance

class A(metaclass=Meta):
    def __init__(self, x: int, y: str):
        pass
which will correctly give:
In [12]: A?
Init signature: A(x: int, y: str)
Docstring: <no docstring>
Type: Meta
Subclasses:
In [13]: A(0, 'y')
Instantiated: A
Out[13]: <__main__.A at 0x...>
OK, even though your reason for wanting this seems misguided (any "honest" Python inspection tool should show the __init__ signature), what you ask for requires generating a dynamic metaclass for each class, whose __call__ method has the same signature as the class's own __init__ method.
For faking the __init__ signature on __call__ we can simply use functools.wraps (but you might want to check the answers at https://stackoverflow.com/a/33112180/108205).
And the extra metaclass can be created dynamically in the metaclass's own __new__, with some care to avoid infinite recursion in the __new__ method; threading.Lock can help with that in a more consistent way than a simple global flag.
from functools import wraps
from threading import Lock

creation_locks = {}

class M(type):
    def __new__(metacls, name, bases, namespace):
        lock = creation_locks.setdefault(name, Lock())
        if lock.locked():
            # re-entered while building the patched metaclass below:
            # create the class normally to stop the recursion
            return super().__new__(metacls, name, bases, namespace)
        with lock:
            def __call__(cls, *args, **kwargs):
                return super().__call__(*args, **kwargs)
            new_metacls = type(metacls.__name__ + "_sigfix", (metacls,), {"__call__": __call__})
            cls = new_metacls(name, bases, namespace)
            wraps(cls.__init__)(__call__)
            del creation_locks[name]
            return cls
I initially thought of using a named parameter to the metaclass __new__ to control recursion, but it would then be passed on to the created class's __init_subclass__ method (resulting in an error), hence the use of the Lock.
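A quick way to verify the result, reusing the M metaclass above (inspect.signature follows the __wrapped__ attribute that functools.wraps copied onto the generated __call__):

import inspect

class A(metaclass=M):
    def __init__(self, x: int, y: str):
        pass

print(inspect.signature(A))   # (x: int, y: str)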
Not sure if this helps the author, but in my case I needed to change inspect.signature(Klass) to inspect.signature(Klass.__init__) to get the signature of the class's __init__ instead of the metaclass's __call__.

Dynamically load module with Inheritance

I know that there are several posts on this topic; however, for whatever reason I can't get my head around it, or at least implement it. Below is some sample code of what I am trying to do.
Base Class:
class Animal(object):
    def __init__(self, age):
        self._age = age

    def getAge(self):
        return self._age

    def speak(self):
        raise NotImplementedError()

    def speak_twice(self):
        self.speak()
        self.speak()
Sub Class
from Animal import Animal

class Dog(Animal):
    def speak(self):
        print "woff!"
Test Code
mod = __import__("Dog")
spot = mod(5)
After running test Code I get this error:
Traceback (most recent call last):
File "C:~test.py", line 2, in <module>
spot = mod(5)
TypeError: 'module' object is not callable
So basically my question is how do I load modules dynamically and initialize them correctly?
EDIT:
I will not know the subclass until runtime
You have to import the module itself, then get its class member. You can't just import the class. Assuming your subclass is in a file accessible from the pythonpath as 'animal':
mod = __import__('animal')
spot = mod.Dog(5)
When you import a module, the interpreter first looks to see if a module with that name exists in sys.modules, then if it fails to find it there, it searches over the pythonpath looking for a package or module matching the given name. If and when it finds one, it parses the code therein, builds a module object out of it, places it on sys.modules, and returns the module object to the calling scope to be bound to the name it was imported with in the given namespace. All the items in the module (classes, variables, functions) in the module scope (not nested inside something else in the code) are then available as members of that module instance.
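As a side note, a modern spelling of the same lookup uses importlib; the module and class names below follow the answer's assumption that the subclass lives in animal.py on the pythonpath:

import importlib

mod = importlib.import_module('animal')   # same effect as __import__('animal')
spot = mod.Dog(5)

# or, when the class name is only known at runtime:
cls = getattr(mod, 'Dog')
spot = cls(5)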
Edit:
In response to your comment, the real problem is that you are trying to look up an attribute of the module dynamically, not that you are trying to import anything dynamically. The most direct way to do that would be:
import sub_animal
getattr(sub_animal, 'Dog')
However, if you are trying to dynamically determine the class to initialize based upon some conditions, you probably want to read up on the factory pattern, and possibly decorators or even metaclasses, so that subclasses are added to the factory automatically.
class AnimalFactory(type):
    animal_classes = {}

    def __new__(cls, name, bases, attrs):
        new_class = super(AnimalFactory, cls).__new__(cls, name, bases, attrs)
        AnimalFactory.animal_classes[name] = new_class
        return new_class

    @classmethod
    def build(cls, name, *args, **kwargs):
        try:
            klass = cls.animal_classes[name]
        except KeyError:
            raise ValueError('No known animal %s' % name)
        return klass(*args, **kwargs)
class Animal(object):
    __metaclass__ = AnimalFactory

    def __init__(self, age):
        self.age = age

    def speak(self):
        raise NotImplementedError()

# As long as the file it is implemented in is imported at some point,
# the following can be anywhere
class Dog(Animal):
    def speak(self):
        return 'woof'

# And then to use, again, anywhere
new_animal = AnimalFactory.build('Dog', 5)

Deprecate usage of a class as a parent class in Python

I'm working with a Python 2.x framework, and a recent version of the framework has moved some widely used base classes from module A to module B (renaming the classes to clearer names in the process). Module A defines backward compatible identifiers for the new class names.
B.py:
class BaseClass(object):
    __metaclass__ = framework_meta  # handles registration etc.
A.py:
import B
oldbase = B.BaseClass
Now, in order to help people migrate their code, I would like to issue a DeprecationWarning (using warnings.warn) whenever code using the framework defines a class deriving from A.oldbase, telling the programmer to inherit directly from B.BaseClass.
I expect this can be achieved with a metaclass. I tried to declare a new metaclass deriving from the framework metaclass
class deprecated_base_class(framework_meta):
    def __new__(meta, name, bases, attrs):
        warning = '%(class)s is deprecated'
        for b in bases:
            warning = getattr(b, '__deprecation_warning__', None) or warning
        warn(warning % {'class': name}, DeprecationWarning, stacklevel=2)
        return super(deprecated_base_class, meta).__new__(meta, name, bases, attrs)
together with:
A.py:
class oldbase(B.BaseClass):
    __metaclass__ = deprecated_base_class
    __deprecation_warning__ = 'class oldbase is deprecated. Use B.BaseClass instead'
clientcode.py
class FooBar(oldbase):
    pass
The problem I have now is that I get a DeprecationWarning for the definition of oldbase itself. How can I fix this?
You want to display the warning if any of the bases are deprecated:
class deprecated_base_class(framework_meta):
    def __new__(meta, name, bases, attrs):
        for b in bases:
            if isinstance(b, deprecated_base_class):
                warning = getattr(b, '__deprecation_warning__', '%(class)s is deprecated')
                warn(warning % {'class': b.__name__}, DeprecationWarning, stacklevel=2)
        return super(deprecated_base_class, meta).__new__(meta, name, bases, attrs)
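With this version, importing A.py no longer warns when oldbase itself is defined (B.BaseClass is not an instance of deprecated_base_class), while client code inheriting from oldbase does warn. A sketch, assuming the framework pieces from the question are in place:

import warnings
warnings.simplefilter('always', DeprecationWarning)

from A import oldbase

class FooBar(oldbase):   # emits: DeprecationWarning: class oldbase is
    pass                 # deprecated. Use B.BaseClass instead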

Metaclasses in Python: a couple of questions to clarify

After struggling with metaclasses I delved into the topic of metaprogramming in Python, and I have a couple of questions that are, IMHO, not clearly answered in the available docs.
1. When using both __new__ and __init__ in a metaclass, must their arguments be defined the same?
2. What's the most efficient way to define a class __init__ in a metaclass?
3. Is there any way to refer to the class instance (normally self) in a metaclass?
When using both __new__ and __init__ in a metaclass, must their arguments be defined the same?

I think Alex Martelli explains it most succinctly:

class Name(Base1, Base2):
    __metaclass__ = suitable_metaclass
    <<body>>

means

Name = suitable_metaclass('Name', (Base1, Base2), <<dict-built-by-body>>)
So stop thinking about suitable_metaclass as a metaclass for a moment and just regard it as a class. Whenever you see

suitable_metaclass('Name', (Base1, Base2), <<dict-built-by-body>>)

it tells you that suitable_metaclass's __new__ method must have a signature something like

def __new__(metacls, name, bases, dct)

and an __init__ method like

def __init__(cls, name, bases, dct)

So the signatures are not exactly the same, but they differ only in the first argument.
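A minimal sketch of a metaclass defining both hooks with these matching argument lists (Python 2 style two-argument super, to match the question's vintage):

class Meta(type):
    def __new__(metacls, name, bases, dct):
        # runs first and actually creates the class object
        return super(Meta, metacls).__new__(metacls, name, bases, dct)

    def __init__(cls, name, bases, dct):
        # runs second, receiving the freshly created class
        super(Meta, cls).__init__(name, bases, dct)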
What's the most efficient way to define class __init__ in a metaclass?

What do you mean by efficient? It is not necessary to define the __init__ unless you want to.
Is there any way to refer to the class instance (normally self) in a metaclass?

No, and you should not need to. Anything that depends on the class instance should be dealt with in the class definition, rather than in the metaclass.
For 1: The __init__ and __new__ of any class have to accept the same arguments, because they are called with the same arguments. It's common for __new__ to accept extra arguments that it ignores (e.g. object.__new__ takes any arguments and ignores them) so that __new__ doesn't have to be overridden during inheritance, but that is usually only done when you have no custom __new__ at all.
This isn't a problem here because, as stated above, metaclasses are always called with the same set of arguments, so you can't run into trouble. With the arguments, at least. But if you modify the arguments that are passed to the parent class, you need to modify them in both methods.
For 2: You usually don't define the class __init__ in a metaclass. You can write a wrapper and replace the __init__ of the class in either the __new__ or the __init__ of the metaclass, or you can redefine __call__ on the metaclass. The former acts weirdly if you use inheritance:
import functools

class A(type):
    def __call__(cls, *args, **kwargs):
        r = super(A, cls).__call__(*args, **kwargs)
        print "%s was instantiated" % (cls.__name__, )
        print "the new instance is %r" % (r, )
        return r

class B(type):
    def __init__(cls, name, bases, dct):
        super(B, cls).__init__(name, bases, dct)
        if '__init__' not in dct:
            return
        old_init = dct['__init__']
        @functools.wraps(old_init)
        def __init__(self, *args, **kwargs):
            old_init(self, *args, **kwargs)
            print "%s (%s) was instantiated" % (type(self).__name__, cls.__name__)
            print "the new instance is %r" % (self, )
        cls.__init__ = __init__

class T1:
    __metaclass__ = A

class T2:
    __metaclass__ = B
    def __init__(self):
        pass

class T3(T2):
    def __init__(self):
        super(T3, self).__init__()
And the result from calling it:
>>> T1()
T1 was instantiated
the new instance is <__main__.T1 object at 0x7f502c104290>
<__main__.T1 object at 0x7f502c104290>
>>> T2()
T2 (T2) was instantiated
the new instance is <__main__.T2 object at 0x7f502c0f7ed0>
<__main__.T2 object at 0x7f502c0f7ed0>
>>> T3()
T3 (T2) was instantiated
the new instance is <__main__.T3 object at 0x7f502c104290>
T3 (T3) was instantiated
the new instance is <__main__.T3 object at 0x7f502c104290>
<__main__.T3 object at 0x7f502c104290>
For 3: Yes, from __call__ as shown above.
