Pass closure to FunctionType in function - python

I have code like this:
class A():
    def __init__(self, a):
        self.a = a

    def outer_method(self):
        def inner_method():
            return self.a + 1
        return inner_method()
I want to write a test for inner_method. For that, I am using code like this:
def find_nested_func(parent, child_name):
    """
    Return the function named <child_name> that is defined inside
    a <parent> function.
    Returns None if nonexistent.
    """
    consts = parent.__code__.co_consts
    item = list(filter(lambda x: isinstance(x, CodeType) and x.co_name == child_name, consts))[0]
    return FunctionType(item, globals())
I call it with find_nested_func(A().outer_method, 'inner_method'), but it fails when calling FunctionType because the function cannot be created: 'self.a' stops existing the moment the function stops being an inner function. I know the FunctionType constructor can receive a closure argument that could fix this problem, but I don't know how to use it. How can I pass it?
The error it gives is the following:
return FunctionType(item, globals())
TypeError: arg 5 (closure) must be tuple
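For completeness, here is one way the closure argument can be supplied. This is a sketch assuming Python 3.8+ (for types.CellType), adapting the question's find_nested_func to return the code object: one cell must be created per name in the code object's co_freevars.

```python
from types import FunctionType, CodeType, CellType

class A:
    def __init__(self, a):
        self.a = a

    def outer_method(self):
        def inner_method():
            return self.a + 1
        return inner_method()

def find_nested_code(parent, child_name):
    # return the code object of <child_name> defined inside <parent>
    return next(c for c in parent.__code__.co_consts
                if isinstance(c, CodeType) and c.co_name == child_name)

code = find_nested_code(A.outer_method, 'inner_method')
instance = A(1)
# inner_method's only free variable is 'self', so supply one cell per
# name in co_freevars, each holding the instance
closure = tuple(CellType(instance) for _ in code.co_freevars)
inner = FunctionType(code, globals(), None, None, closure)
print(inner())  # 2
```

This silences the "arg 5 (closure) must be tuple" error because the closure tuple now matches the code object's free variables, though (as the answer below argues) extracting inner functions this way is fragile.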

Why are you trying to test inner_method? In most cases, you should only test parts of your public API. outer_method is part of A's public API, so test just that. inner_method is an implementation detail that can change: what if you decide to rename it? what if you refactor it slightly without modifying the externally visible behavior of outer_method? Users of the class A have no (easy) way of calling inner_method. Unit tests are usually only meant to test things that users of your class can call (I'm assuming these are for unit tests, because integration tests this granular would be strange--and the same principle would still mostly hold).
Practically, you'll have a problem extracting functions defined within another function's scope, for several reasons, including variable capture. You have no way of knowing whether inner_method only captures self, or whether outer_method performs some logic and computes some variables that inner_method uses. For example:
class A:
    def outer_method(self):
        b = 1
        def inner_method():
            return self.a + b
        return inner_method()
Additionally, you could have control statements around the function definition, so there is no way to decide which definition is used without running outer_method. For example:
import random

class A:
    def outer_method(self):
        if random.random() < 0.5:
            def inner_method():
                return self.a + 1
        else:
            def inner_method():
                return self.a + 2
        return inner_method()
You can't extract inner_method here because there are two of them and you don't know which is actually used until you run outer_method.
So, just don't test inner_method.
If inner_method is truly complex enough that you want to test it in isolation (and if you do so, principled testing says you should mock out its uses, e.g. its use in outer_method), then just make it a "private-ish" method on A:
class A:
    def _inner_method(self):
        return self.a + 1

    def outer_method(self):
        return self._inner_method()
Principled testing says you really shouldn't be testing underscore methods, but sometimes necessity requires it. Doing things this way allows you to test _inner_method just as you would any other method. Then, when testing outer_method, you can mock it out by doing a._inner_method = Mock() (where a is the A object under test).
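For illustration, that test setup might look like the following sketch (the class body is repeated from above with an assumed __init__, and return_value=42 is an arbitrary sentinel):

```python
from unittest.mock import Mock

class A:
    def __init__(self, a):
        self.a = a

    def _inner_method(self):
        return self.a + 1

    def outer_method(self):
        return self._inner_method()

a = A(1)
assert a._inner_method() == 2            # test the helper directly

a._inner_method = Mock(return_value=42)  # mock it out when testing outer_method
assert a.outer_method() == 42
a._inner_method.assert_called_once_with()
```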
Also, use class A. The parens are unnecessary unless you have parent classes.

Related

Python - Is it possible to define an instance method inside another instance method?

Is it possible to do something like this? (This syntax doesn't actually work)
class TestClass(object):
    def method(self):
        print 'one'
        def dynamically_defined_method(self):
            print 'two'

c = TestClass()
c.method()
c.dynamically_defined_method() # this doesn't work
If it's possible, is it terrible programming practice? What I'm really trying to do is to have one of two variations of the same method be called (both with identical names and signatures), depending on the state of the instance.
Defining the function in the method doesn't automatically make it visible to the instance--it's just a function that is scoped to live within the method.
To expose it, you'd be tempted to do:
self.dynamically_defined_method = dynamically_defined_method
Only that doesn't work:
TypeError: dynamically_defined_method() takes exactly 1 argument (0 given)
You have to mark the function as being a method (which we do by using MethodType). So the full code to make that happen looks like this:
from types import MethodType

class TestClass(object):
    def method(self):
        def dynamically_defined_method(self):
            print "two"
        self.dynamically_defined_method = MethodType(dynamically_defined_method, self)

c = TestClass()
c.method()
c.dynamically_defined_method()
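To get at the question's actual goal (choosing between two same-signature variants based on instance state), the same MethodType trick can be applied at construction time. A sketch, with invented class and method names:

```python
from types import MethodType

class Switcher(object):
    def __init__(self, use_first=True):
        # pick one of two variants, both with identical signatures,
        # based on the instance's state at construction time
        if use_first:
            def variant(self):
                return 'one'
        else:
            def variant(self):
                return 'two'
        self.variant = MethodType(variant, self)

assert Switcher(True).variant() == 'one'
assert Switcher(False).variant() == 'two'
```

That said, overriding the method in a subclass, or dispatching inside a single method body, is usually cleaner than rebinding methods per instance.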

Python - how to properly extend classes to modify the operation of base methods

I am using some open-source python code that I need to make a slight modification to. Here is the starting situation.
Starting Situation
class BaseGatherer:
    def get_string(self):
        # a bunch of stuff I don't need to modify
        return self.my_string # set with a hard value in __init__()

class BaseDoer:
    def do_it(self, some_gatherer):
        # a bunch of stuff I don't need to modify
        string_needed = some_gatherer.get_string()
        self.do_stuff(string_needed)
Externally, this would get used as follows:
my_gatherer = BaseGatherer()
my_doer = BaseDoer()
my_doer.do_it(my_gatherer)
What I need to change
What I need to do is two things:
Have BaseGatherer::get_string() return its my_string with an inserted modification that changes each time it gets called. For example, the first time it gets called I get my_string[0:1] + 'loc=0' + my_string[1:], the second time I get my_string[0:1] + 'loc=10' + my_string[1:], etc.
Modify BaseDoer::do_it() to call BaseDoer::do_stuff() in a loop (until some stopping condition), each time setting string_needed with some_gatherer.get_string() which by #1 returns a different string with each call.
with the following restriction
The base code I am using is regularly updated and I don't want to modify that code at all; I want to be able to clone the repo I get it from and only possibly have to modify my "extended" code. It's ok to assume the names of the BaseGatherer and BaseDoer classes don't change in the base code, nor do the names of the methods I care about here, though some auxiliary methods that I don't need to modify will get updated (which is key for me).
My Question
My main question is what is the best, most Pythonic, way to do this?
My Attempt
Given the restriction I mentioned, my first inclination is to write derived classes of both BaseGatherer and BaseDoer which make the changes I need by writing new versions of the get_string() and do_it() methods respectively. But, I have a feeling that I should use function decorators to do this. I've read up on them a little and I get the basic idea (they wrap a function so that you can control/modify parameters passed to or values returned from the function you are wrapping, without modifying that function? Please correct me if I am missing something crucial). But, I don't know how to implement this in the derived class, either syntactically or logically. For example, do I have to give the function decorator that I write a @classmethod decorator? If so, why?
Here is what I did. It works but I want to learn and understand a) what is the right way to do what I want and b) how to actually do it.
class DerivedGatherer(BaseGatherer):
    def __init__(self):
        super(DerivedGatherer, self).__init__() # I'm in Python 2.7 :(
        self.counter = 0 # new variable not in BaseGatherer

    # DON'T override BaseGatherer::get_string(), just write a new function
    def get_string_XL(self):
        self.counter += 10
        return self.my_string[0:1] + 'loc=' + str(self.counter) + self.my_string[1:]

class DerivedDoer(BaseDoer):
    def do_it_XL(self, some_derived_gatherer):
        while not some_stop_condition():
            string_needed = some_derived_gatherer.get_string_XL()
            self.do_stuff(string_needed)
I would then call it just as above but create derived instances and call their XL methods instead of the base class one's.
But, there are problems with this that don't satisfy my goal/requirement above: both BaseGatherer::get_string() and BaseDoer::do_it() perform a lot of other functionality which I would have to just copy into my new functions (see the comment in them). This means when the code I'm using gets updated I have to do the copying to update my derived classes. But, I am ONLY changing what you see here: inserting something into the string that changes with each call and putting a loop at the end of do_it(). This is why I have a feeling that function decorators are the way to go. That I can wrap both get_string() and do_it() so that for the former I can modify the string with each call by looking at a local counter and for the second I can just have a loop that calls the base do_it() in a loop and passes a different string each time (it's ok to call do_it() multiple times).
Any help with suggestions on the best way to do this and how would be very greatly appreciated.
Thank you!
This is most easily done with plain inheritance, no repetition of code:
class DerivedGatherer(BaseGatherer):
    def __init__(self):
        super(DerivedGatherer, self).__init__()
        self.counter = 0

    def get_string(self):
        self.counter += 10
        super_string = super(DerivedGatherer, self).get_string()
        return super_string[0:1] + 'loc=' + str(self.counter) + super_string[1:]

class DerivedDoer(BaseDoer):
    def do_it(self, some_derived_gatherer):
        while not some_stop_condition():
            super(DerivedDoer, self).do_it(some_derived_gatherer)
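A runnable sketch of that pattern, with minimal stand-ins for the base classes (their bodies, the string value, and the stop condition are invented for illustration; the real base methods would carry their extra logic unchanged):

```python
class BaseGatherer(object):
    def __init__(self):
        self.my_string = '<x>'  # hard value, as in the original __init__()

    def get_string(self):
        return self.my_string

class BaseDoer(object):
    def __init__(self):
        self.results = []

    def do_stuff(self, s):
        self.results.append(s)

    def do_it(self, some_gatherer):
        self.do_stuff(some_gatherer.get_string())

class DerivedGatherer(BaseGatherer):
    def __init__(self):
        super(DerivedGatherer, self).__init__()
        self.counter = 0

    def get_string(self):
        self.counter += 10
        s = super(DerivedGatherer, self).get_string()
        return s[0:1] + 'loc=' + str(self.counter) + s[1:]

class DerivedDoer(BaseDoer):
    def do_it(self, some_gatherer, times=2):  # stop condition simplified to a count
        for _ in range(times):
            super(DerivedDoer, self).do_it(some_gatherer)

g, d = DerivedGatherer(), DerivedDoer()
d.do_it(g)
print(d.results)  # ['<loc=10x>', '<loc=20x>']
```

Note how each overridden method delegates to the base version, so any auxiliary logic the upstream code adds later is picked up automatically.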
As Martijn has explained to you, you can easily call a method in the base class from a method in the subclass. You can do it with super to ensure you traverse the entire class hierarchy, as he shows. For illustration, though, if you have a simple hierarchy with only one superclass and one subclass, this is how you can do it:
>>> class Foo(object):
... def xxx(self):
... return "Foo.xxx was called"
...
>>> class Bar(Foo):
... def xxx(self):
... return "Bar.xxx was called and then %s" % Foo.xxx(self)
...
>>> Bar().xxx()
'Bar.xxx was called and then Foo.xxx was called'

How to unit-test decorated functions?

I tried lately to train myself a lot in unit-testing best practices. Most of it makes perfect sense, but there is something that is often overlooked and/or badly explained: how should one unit-test decorated functions ?
Let's assume I have this code:
from functools import wraps

def stringify(func):
    @wraps(func)
    def wrapper(*args):
        return str(func(*args))
    return wrapper

class A(object):
    @stringify
    def add_numbers(self, a, b):
        """
        Returns the sum of `a` and `b` as a string.
        """
        return a + b
I can obviously write the following tests:
from unittest.mock import MagicMock

def test_stringify():
    @stringify
    def func(x):
        return x
    assert func(42) == "42"

def test_A_add_numbers():
    instance = MagicMock(spec=A)
    result = A.add_numbers.__wrapped__(instance, 3, 7)
    assert result == 10
This gives me 100% coverage: I know that any function that gets decorated with stringify() gets his result as a string, and I know that the undecorated A.add_numbers() function returns the sum of its arguments. So by transitivity, the decorated version of A.add_numbers() must return the sum of its argument, as a string. All seems good !
However I'm not entirely satisfied with this: my tests, as I wrote them could still pass if I were to use another decorator (that does something else, say multiply the result by 2 instead of casting to a str). My function A.add_numbers would not be correct anymore yet the tests would still pass. Not awesome.
I could test the decorated version of A.add_numbers() but then I would overtest things since my decorator is already unit-tested.
It feels like I'm missing something here. What is a good strategy to unit-test decorated functions ?
I ended up splitting my decorators in two. So instead of having:
def stringify(func):
    @wraps(func)
    def wrapper(*args):
        return str(func(*args))
    return wrapper
I have:
def to_string(value):
    return str(value)

def stringify(func):
    @wraps(func)
    def wrapper(*args):
        return to_string(func(*args))
    return wrapper
Which allows me later to simply mock-out to_string when testing the decorated function.
Obviously in this simple example case it might seem overkill, but when used over a decorator that actually does something complex or expensive (like opening a connection to a DB, or whatever), being able to mock it out is a very nice thing.
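For instance, because wrapper looks up to_string at call time, a test can patch it in the module that defines it. A sketch (the patch target below assumes everything lives in one module; in real code you would patch '<defining_module>.to_string'):

```python
from functools import wraps
from unittest.mock import patch

def to_string(value):
    return str(value)

def stringify(func):
    @wraps(func)
    def wrapper(*args):
        return to_string(func(*args))
    return wrapper

class A(object):
    @stringify
    def add_numbers(self, a, b):
        return a + b

# patch to_string where the wrapper will look it up;
# the mock passes the value through unchanged
with patch(f'{__name__}.to_string', side_effect=lambda v: v) as m:
    result = A().add_numbers(3, 7)

assert result == 10            # decorator ran, but the conversion was mocked out
m.assert_called_once_with(10)  # and the wrapper fed it the raw sum
```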
Test the public interface of your code. If you only expect people to call the decorated functions, then that's what you should test. If the decorator is also public, then test that too (like you did with test_stringify()). Don't test the wrapped versions unless people are directly calling them.
One of the major benefits of unit testing is to allow refactoring with some degree of confidence that the refactored code continues to work the same as it did previously. Suppose you had started with
def add_numbers(a, b):
    return str(a + b)

def mult_numbers(a, b):
    return str(a * b)
You would have some tests like
def test_add_numbers():
    assert add_numbers(3, 5) == "8"

def test_mult_numbers():
    assert mult_numbers(3, 5) == "15"
Now, you decide to refactor the common parts of each function (wrapping the output in a string), using your stringify decorator.
def stringify(func):
    @wraps(func)
    def wrapper(*args):
        return str(func(*args))
    return wrapper

@stringify
def add_numbers(a, b):
    return a + b

@stringify
def mult_numbers(a, b):
    return a * b
You'll notice that your original tests continue to work after this refactoring. It doesn't matter how you implemented add_numbers and mult_numbers; what matters is that they continue to work as defined: returning a stringified result of the desired operation.
The only remaining test you need to write is one to verify that stringify does what it is intended to do: return the result of the decorated function as a string, which your test_stringify does.
Your issue seems to be that you want to treat the unwrapped function, the decorator, and the wrapped function as units. But if that's the case, then you are missing one unit test: the one that actually runs the decorated add_numbers and tests its output, rather than just add_numbers.__wrapped__. It doesn't really matter if you consider testing the wrapped function a unit test or an integration test, but whatever you call it, you need to write it, because as you pointed out, it's not sufficient to test just the unwrapped function and the decorator separately.
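That missing test is short. Assuming the stringify/A definitions from the question, it could look like:

```python
from functools import wraps

def stringify(func):
    @wraps(func)
    def wrapper(*args):
        return str(func(*args))
    return wrapper

class A(object):
    @stringify
    def add_numbers(self, a, b):
        return a + b

def test_A_add_numbers_decorated():
    # exercises the decorated function end to end:
    # fails if the decorator is swapped for one that, say, doubles the result
    assert A().add_numbers(3, 7) == "10"

test_A_add_numbers_decorated()
```

Unlike the __wrapped__-based test, this one would catch the "wrong decorator" scenario described above.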

using self in python @patch decorator

I'm trying to use python's mock.patch to implement unit tests with nose.
class A:
    def setUp(self):
        self.b = 8 # contrived example

    @patch.object('module.class', 'function', lambda x: self.b)
    def testOne(self):
        # do test #
Here, patch complains that it doesn't know self (which is correct). What is the best way to get this kind of functionality in a clean fashion?
I know I can use a global variable, or that I can mock it within the test (but that involves me cleaning up the objects at the end of the test).
You cannot use self in a method decorator because you are in the class definition and the object doesn't exist yet. If you really want access to self, and not just some static values, consider the following approach. Here, totest is a module on my Python path and fn is the method I would patch; moreover, I'm using a fixed return_value instead of a function for a more readable example:
import unittest
from unittest.mock import patch  # or `from mock import patch` on Python 2

class MyTestCase(unittest.TestCase):
    def setUp(self):
        self.b = 8 # contrived example

    def testOne(self):
        with patch('totest.fn', return_value=self.b) as m:
            self.assertEqual(self.b, m())
            self.assertTrue(m.called)

    @patch("totest.fn")
    def testTwo(self, m):
        m.return_value = self.b
        self.assertEqual(self.b, m())
        self.assertTrue(m.called)
In testOne() I use patch as a context manager and have full access to self. In testTwo() (which is my standard way) I set up my mock m at the start of the test and then use it.
Finally, I used patch() instead of patch.object() because I don't really understand why you need patch.object(), but you can change it as you like.

Python 3: Calling a Function from a class, self

I am trying to learn about classes. Can someone explain to me why this code is not working? I thought when calling a function from a class, "self" is automatically omitted, but the interpreter tells me that argument "a" is missing (it thinks self = 10).
#! coding=utf-8
class test:
    def __init__(self):
        "do something here"

    def do(self, a):
        return a**2

d = test.do
print(d(10))
Instantiate the class first:
d = test().do
print(d(10)) # prints 100
In Python 3, test.do is a plain (unbound) function, while test().do is a method bound to a fresh instance. The difference is explained in this thread: Class method differences in Python: bound, unbound and static.
You have to instantiate the class first:
d = test()
then you can call a method:
print(d.do(10))
If you want to use the method statically, you have to declare it as a static method:
#! coding=utf-8
class test:
    def __init__(self):
        "do something here"

    @staticmethod
    def do(a):
        return a**2

d = test.do
print(d(10)) # and that works
Since you haven't instantiated the class ("instantiated" is a fancy term for "created an object of"), you can't assign its methods to any random variable. As already said, you must create the object first, making sure the method you call is part of the class you instantiated (or connected to it in some way, such as through another class). So you should first type d = test() followed by d.do(10).
Also, remember that when you declared do you gave it a parameter a, so you must put within the brackets the number whose square you want. When you instead write test.do(10) on the class itself, the 10 is bound to the self parameter, which is exactly why the interpreter complains that a is missing.
One more thing: although it isn't a huge deal, it helps if all of your class names begin with a capital letter, as this is the 'pythonic' way to do things, and it also makes your code easier to read; when you first called the class, somebody could easily mistake it for an ordinary function.
class test:
    def __init__(self):
        "do something here"

    def do(self, a):
        return a**2

    def __call__(self, a):
        return self.do(a)

a = test
test.do(a, 10)
# or
a = test().do
a(10)
# or
a = test()
test.do(a, 10)
# or
a = test()
print(a(10))
