replacing the "new" module - python

I have code which contains the following two lines:
instanceMethod = new.instancemethod(testFunc, None, TestCase)
setattr(TestCase, testName, instanceMethod)
How could it be rewritten without using the "new" module? I'm sure new-style classes provide some kind of workaround for this, but I am not sure how.

There is a discussion suggesting that in Python 3 this is not required; the same approach works in Python 2.6:
http://mail.python.org/pipermail/python-list/2009-April/531898.html
See:
>>> class C: pass
...
>>> c=C()
>>> def f(self): pass
...
>>> c.f = f.__get__(c, C)
>>> c.f
<bound method C.f of <__main__.C instance at 0x10042efc8>>
>>> f.__get__(None, C)
<unbound method C.f>
>>>
Reiterating the question for everyone's benefit, including mine.
Is there a replacement in Python3 for new.instancemethod? That is, given an arbitrary instance (not its class) how can I add a new appropriately defined function as a method to it?
So the following should suffice:
TestCase.testFunc = testFunc.__get__(None, TestCase)
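For the per-instance case the question asks about, here is a minimal Python 3 sketch (the names mirror the question; __get__ binds the function to a single instance):

class TestCase:
    pass

def testFunc(self):
    return 'patched'

tc = TestCase()
tc.testFunc = testFunc.__get__(tc)  # bind testFunc to this one instance only
print(tc.testFunc())                # prints 'patched'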

You can replace "new.instancemethod" by "types.MethodType":
from types import MethodType as instancemethod
class Foo:
def __init__(self):
print 'I am ', id(self)
def bar(self):
print 'hi', id(self)
foo = Foo() # prints 'I am <instance id>'
mm = instancemethod(bar, foo) # automatically uses foo.__class__
mm() # prints 'I have been bound to <same instance id>'
foo.mm # traceback because no 'field' created in foo to hold ref to mm
foo.mm = mm # create ref to bound method in foo
foo.mm() # prints 'I have been bound to <same instance id>'
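For completeness, the same idea in Python 3 (a minimal sketch; there types.MethodType takes just the function and the instance, and print is a function):

from types import MethodType

class Foo:
    pass

def bar(self):
    print('hi', id(self))

foo = Foo()
foo.mm = MethodType(bar, foo)  # bind bar to this particular instance
foo.mm()                       # prints 'hi <id of foo>'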

This will do the same:
>>> TestCase.testName = testFunc
Yeah, it's really that simple.
Your line
>>> instanceMethod = new.instancemethod(testFunc, None, TestCase)
is in practice (although not in theory) a no-op. :) You could just as well do
>>> instanceMethod = testFunc
In fact, in Python 3 I'm pretty sure it would be the same in theory as well, but the new module is gone so I can't test it in practice.
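A quick Python 3 check of that claim (a sketch; the names are illustrative):

class TestCase:
    pass

def testFunc(self):
    return 'ok'

TestCase.testName = testFunc  # plain assignment is enough in Python 3
print(TestCase().testName())  # prints 'ok'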

To confirm that it's not needed to use new.instancemethod() at all since Python v2.4, here's an example of how to replace an instance method. It's also not needed to use descriptors (even though they work).
class Ham(object):
    def spam(self):
        pass

h = Ham()

def fake_spam():
    h._spam = True

h.spam = fake_spam
h.spam()
# h._spam should be True now.
Handy for unit testing.
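On Python 3 the same stub is often written with unittest.mock instead; a rough equivalent (sketch):

from unittest import mock

h = Ham()
with mock.patch.object(h, 'spam') as fake_spam:
    h.spam()
fake_spam.assert_called_once_with()  # the real spam() was never run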

Related

Monkey-patching class with inherited classes in Python

After reading the answers to the question about monkey-patching classes in Python I tried to apply the advised solution to the following case.
Imagine that we have a module a.py
class A(object):
    def foo(self):
        print(1)

class AA(A):
    pass
and let us try to monkey patch it as follows. It works when we monkey patch class A:
>>> import a
>>> class B(object):
...     def foo(self):
...         print(3)
...
>>> a.A = B
>>> x = a.A()
>>> x.foo()
3
But if we try the inherited class, it turns out not to be patched:
>>> y = a.AA()
>>> y.foo()
1
Is there any way to monkey patch the class with all its inherited classes?
EDIT
For now, the best solution for me is as follows:
>>> class AB(B, a.AA):
...     pass
...
>>> a.AA = AB
>>> x = a.AA()
>>> x.foo()
3
Any complex structure of a.AA will be inherited, and the only difference between AB and a.AA will be the foo() method. This way, we don't modify any internal class attributes (like __bases__ or __dict__). The only remaining drawback is that we need to do this for each of the inherited classes.
Is it the best way to do this?
You need to explicitly overwrite the tuple of base classes in a.AA, though I don't recommend modifying classes like this.
>>> import a
>>> class B:
...     def foo(self):
...         print(2)
...
>>> a.AA.__bases__ = (B,)
>>> a.AA().foo()
2
This will also be reflected in a.A.__subclasses__() (although I am not entirely sure as to how that works; the fact that it is a method suggests that it computes this somehow at runtime, rather than simply returning a value that was modified by the original definition of AA).
It appears that the base classes in a class statement are simply remembered, rather than used, until some operation needs them (e.g. during attribute lookup). There may be some other subtle corner cases that aren't handled as smoothly: caveat programmator.
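A quick check of that claim, continuing the session above (a sketch; exact reprs may vary):

>>> a.A.__subclasses__()  # AA has been re-based away from A
[]
>>> B.__subclasses__()
[<class 'a.AA'>]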

How to "wrap" object to automatically call superclass method instead of overriden ones?

Consider:
class A(object):
    def f(self): print("A")

class B(A):
    def f(self): print("B")

b = B()
I can call A.f on b by doing:
A.f(b)
Is there an easy way to "wrap" b such that wrap(b).f() calls A.f for any f?
Here is my solution, which copies the methods from the uppermost base class:
import types, copy

def get_all_method_names(clazz):
    return [func for func in dir(clazz) if callable(getattr(clazz, func))]

def wrap(obj):
    obj = copy.copy(obj)
    obj_clazz = obj.__class__
    base_clazz = obj_clazz.__bases__[-1]             # the one which directly inherits from object
    base_methods = get_all_method_names(base_clazz)  # list of all method names in base_clazz
    for base_method_name in base_methods:
        base_method = getattr(base_clazz, base_method_name)  # get the method object
        if isinstance(base_method, types.FunctionType):      # skip dunder methods like __class__, __init__
            setattr(obj, base_method_name, base_method)      # copy it into our object as a plain function
    return obj

# class declaration from question here
wrapped_b = wrap(b)
wrapped_b.f(wrapped_b)  # prints A; unfortunately we have to pass the self parameter explicitly
b.f()                   # prints B, proof that the original object is untouched
This feels dirty to me, but it also seems to work. I'm not sure I'd rely on this for anything important.
import copy

def upcast(obj, clazz):
    if not isinstance(obj, clazz):  # make sure we're actually "upcasting"
        raise TypeError()
    wrapped = copy.copy(obj)
    wrapped.__class__ = clazz
    return wrapped
This results in
>>> a = A()
>>> a.f()
A
>>> b = B()
>>> b.f()
B
>>> upcast(b, A).f()
A
What I've really done here is essentially monkey-patch a clone of b and lied to it and told it it's actually an A, so when it comes time to resolve which version of f to call, it'll call the one from A.
Object slicing is not supported in Python the way it is done in C++ (the link you are pointing to uses a C++ example).
In Python, "object slicing" is a rather different thing: it means slicing up any object which supports the sequence protocol (implements the __getitem__() and __len__() methods).
Example:
A = [1, 2, 3, 4, 5, 6, 7, 8]
print(A[1:3])
But in C++, object slicing means cutting off the properties added by a derived class when an instance is assigned to a variable of the base class.

Python make function object subscriptable [duplicate]

I need to patch current datetime in tests. I am using this solution:
def _utcnow():
return datetime.datetime.utcnow()
def utcnow():
"""A proxy which can be patched in tests.
"""
# another level of indirection, because some modules import utcnow
return _utcnow()
Then in my tests I do something like:
with mock.patch('***.utils._utcnow', return_value=***):
...
But today an idea came to me, that I could make the implementation simpler by patching __call__ of function utcnow instead of having an additional _utcnow.
This does not work for me:
from ***.utils import utcnow
with mock.patch.object(utcnow, '__call__', return_value=***):
...
How to do this elegantly?
When you patch the __call__ of a function, you are only setting the __call__ attribute of that instance; Python actually calls the __call__ method defined on the class.
For example:
>>> class A(object):
...     def __call__(self):
...         print 'a'
...
>>> a = A()
>>> a()
a
>>> def b(): print 'b'
...
>>> b()
b
>>> a.__call__ = b
>>> a()
a
>>> a.__call__ = b.__call__
>>> a()
a
Assigning anything to a.__call__ is pointless.
However:
>>> A.__call__ = b.__call__
>>> a()
b
TLDR;
a() does not call a.__call__. It calls type(a).__call__(a).
Links
There is a good explanation of why that happens in answer to "Why type(x).__enter__(x) instead of x.__enter__() in Python standard contextlib?".
This behaviour is documented in Python documentation on Special method lookup.
[EDIT]
Maybe the most interesting part of this question is: why can't I patch somefunction.__call__?
Because the function doesn't use __call__'s code; rather, __call__ (a method-wrapper object) uses the function's code.
I can't find any well-sourced documentation about this, but I can demonstrate it (Python 2.7):
>>> def f():
...     return "f"
...
>>> def g():
...     return "g"
...
>>> f
<function f at 0x7f1576381848>
>>> f.__call__
<method-wrapper '__call__' of function object at 0x7f1576381848>
>>> g
<function g at 0x7f15763817d0>
>>> g.__call__
<method-wrapper '__call__' of function object at 0x7f15763817d0>
Replace f's code with g's code:
>>> f.func_code = g.func_code
>>> f()
'g'
>>> f.__call__()
'g'
Of course f and f.__call__ references are not changed:
>>> f
<function f at 0x7f1576381848>
>>> f.__call__
<method-wrapper '__call__' of function object at 0x7f1576381848>
Restore the original implementation and copy the __call__ reference instead:
>>> def f():
...     return "f"
...
>>> f()
'f'
>>> f.__call__ = g.__call__
>>> f()
'f'
>>> f.__call__()
'g'
This has no effect on the f function. Note: in Python 3 you should use __code__ instead of func_code.
I hope that somebody can point me to the documentation that explains this behavior.
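For reference, the same experiment ported to Python 3 (a quick sketch using __code__; both functions have no free variables, so the assignment is allowed):

>>> def f():
...     return "f"
...
>>> def g():
...     return "g"
...
>>> f.__code__ = g.__code__
>>> f()
'g'
>>> f.__call__()
'g'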
You have a way to work around that: in utils you can define
class Utcnow(object):
    def __call__(self):
        return datetime.datetime.utcnow()

utcnow = Utcnow()
And now your patch can work like a charm.
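For instance, a minimal sketch of such a patch (note it patches __call__ on the class, since special-method lookup bypasses the instance; the fixed value is illustrative):

import datetime
from unittest import mock

fixed = datetime.datetime(2000, 1, 1)
with mock.patch.object(Utcnow, '__call__', lambda self: fixed):
    assert utcnow() == fixed  # the callable instance now returns the fixed time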
That said, I consider the original approach the best way to implement your tests.
I have my own golden rule: never patch protected methods. In this case things are a little smoother because the protected function was introduced just for testing, but I still don't see why it should be necessary.
The real problem here is that you cannot patch datetime.datetime.utcnow directly (it's a C extension, as you wrote in the comment above). What you can do instead is patch datetime by wrapping the standard behavior and overriding the utcnow function:
>>> with mock.patch("datetime.datetime", mock.Mock(wraps=datetime.datetime, utcnow=mock.Mock(return_value=3))):
... print(datetime.datetime.utcnow())
...
3
OK, that is not really clear and neat, but you can introduce your own function:
def mock_utcnow(return_value):
    return mock.Mock(wraps=datetime.datetime,
                     utcnow=mock.Mock(return_value=return_value))
and now
mock.patch("datetime.datetime", mock_utcnow(***))
does exactly what you need, without any extra layer, and for every kind of import.
Another solution is to import datetime in utils and patch ***.utils.datetime; that gives you some freedom to change the datetime reference implementation without changing your tests (in that case, take care to change mock_utcnow()'s wraps argument too).
As commented on the question, since datetime.datetime is written in C, Mock can't replace attributes on the class (see "Mocking datetime.today" by Ned Batchelder). Instead, you can use freezegun.
$ pip install freezegun
Here's an example:
import datetime
from freezegun import freeze_time

def my_now():
    return datetime.datetime.utcnow()

@freeze_time('2000-01-01 12:00:01')
def test_freezegun():
    assert my_now() == datetime.datetime(2000, 1, 1, 12, 00, 1)
As you mention, an alternative is to track each module importing datetime and patch them all. This is in essence what freezegun does: it takes an object mocking datetime, iterates through sys.modules to find where datetime has been imported, and replaces every instance. I guess it's arguable whether you can do this elegantly in one function.
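Conceptually, that patch-every-importer approach looks something like this (a rough sketch, not freezegun's actual code):

import sys
import datetime

def patch_datetime_everywhere(fake_datetime):
    # Replace the 'datetime' name in every module that did 'import datetime'.
    # Modules that used 'from datetime import datetime' would need a separate check.
    for module in list(sys.modules.values()):
        if module is not None and getattr(module, 'datetime', None) is datetime:
            setattr(module, 'datetime', fake_datetime)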

What is the least-bad way to create Python classes at runtime?

I am working with an ORM that accepts classes as input and I need to be able to feed it some dynamically generated classes. Currently, I am doing something like this contrived example:
def make_cls(_param):
    class Cls(object):
        param = _param
    return Cls

A, B = map(make_cls, ['A', 'B'])

print A().param
print B().param
While this works fine, it feels off by a bit: for example, both classes print as <class '__main__.Cls'> on the repl. While the name issue is not a big deal (I think I could work around it by setting __name__), I wonder if there are other things I am not aware of.
So my question is: is there a better way to create classes dynamically or is my example mostly fine already?
What is a class? It is just an instance of type. For example:
>>> A = type('A', (object,), {'s': 'i am a member', 'double_s': lambda self: self.s * 2})
>>> a = A()
>>> a
<__main__.A object at 0x01229F50>
>>> a.s
'i am a member'
>>> a.double_s()
'i am a memberi am a member'
From the doc:
type(name, bases, dict)
Return a new type object. This is essentially a dynamic form of the class statement.
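Applied to the question's factory, a sketch that also fixes the repr/name issue (assuming the class only needs the param attribute):

def make_cls(param):
    # type() lets us set the class name explicitly, so each class gets a distinct repr
    return type(param, (object,), {'param': param})

A, B = map(make_cls, ['A', 'B'])
print(A)          # <class '__main__.A'>
print(A().param)  # A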

Difference between type(obj) and obj.__class__

What is the difference between type(obj) and obj.__class__? Is there ever a possibility of type(obj) is not obj.__class__?
I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing?
def f(a, b=None):
    if b is None:
        b = type(a)(1)      # #1
        b = a.__class__(1)  # #2
This is an old question, but none of the answers seems to mention it: in the general case, it IS possible for a new-style class to have different values for type(instance) and instance.__class__:
class ClassA(object):
    def display(self):
        print("ClassA")

class ClassB(object):
    __class__ = ClassA
    def display(self):
        print("ClassB")

instance = ClassB()
print(type(instance))
print(instance.__class__)
instance.display()
Output:
<class '__main__.ClassB'>
<class '__main__.ClassA'>
ClassB
The reason is that ClassB is overriding the __class__ descriptor; however, the internal type field in the object is not changed. type(instance) reads directly from that type field, so it returns the correct value, whereas instance.__class__ goes through the overriding descriptor, which replaces the original one provided by Python (the one that reads the internal type field) and returns the hardcoded ClassA instead.
Old-style classes are the problem, sigh:
>>> class old: pass
...
>>> x=old()
>>> type(x)
<type 'instance'>
>>> x.__class__
<class __main__.old at 0x6a150>
>>>
Not a problem in Python 3 since all classes are new-style now;-).
In Python 2, a class is new-style only if it inherits from another new-style class (including object and the various built-in types such as dict, list, set, ...) or implicitly or explicitly sets __metaclass__ to type.
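A minimal Python 2 illustration of the three cases:

class Old:                    # old-style: no new-style bases
    pass

class New(object):            # new-style: inherits from object
    pass

class AlsoNew:                # new-style via an explicit metaclass
    __metaclass__ = type

print type(Old()), type(New()), type(AlsoNew())
# <type 'instance'> <class '__main__.New'> <class '__main__.AlsoNew'>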
type(obj) and obj.__class__ do not behave the same for old-style classes:
>>> class a(object):
...     pass
...
>>> class b(a):
...     pass
...
>>> class c:
...     pass
...
>>> ai=a()
>>> bi=b()
>>> ci=c()
>>> type(ai) is ai.__class__
True
>>> type(bi) is bi.__class__
True
>>> type(ci) is ci.__class__
False
There's an interesting edge case with proxy objects (that use weak references):
>>> import weakref
>>> class MyClass:
...     x = 42
...
>>> obj = MyClass()
>>> obj_proxy = weakref.proxy(obj)
>>> obj_proxy.x # proxies attribute lookup to the referenced object
42
>>> type(obj_proxy) # returns type of the proxy
weakproxy
>>> obj_proxy.__class__ # returns type of the referenced object
__main__.MyClass
>>> del obj # breaks the proxy's weak reference
>>> type(obj_proxy) # still works
weakproxy
>>> obj_proxy.__class__ # fails
ReferenceError: weakly-referenced object no longer exists
FYI - Django does this.
>>> from django.core.files.storage import default_storage
>>> type(default_storage)
django.core.files.storage.DefaultStorage
>>> default_storage.__class__
django.core.files.storage.FileSystemStorage
As someone with finite cognitive capacity who's just trying to figure out what's going in order to get work done... it's frustrating.
