Is there a function that allows for verbose assertions? - python

Frequently when I am writing code I add assertions for my own sanity, specifically for when I am first writing the code to ensure I do not implement it with bugs (I understand assertions are ignored in production builds of code). Whenever I write an assertion, I find myself verbosely writing something along the lines of:
assert a == b, f"{a} != {b}"
So that if my assertion fails, I have some way of interpreting why. Unfortunately, this often clutters my code: if the variable names are long (or I am in nested structures), adding the message at the end often makes the line too long to be consistent with PEP 8. So I spend time breaking it up over several lines, which turns a basic assert into a multi-line statement.
unittest provides helpful assert methods like assertEqual which shorten this work, make it less prone to me leaving out the error message, and allow me to continue with my work rather than worrying about assertions. The only two options I can come up with are:
Write my own function to be verbose. This would require re-writing many of the assert functions; or
Instantiate an instance of unittest.TestCase() each time I want to make an assertion.
The problem with number 2 is it is unbelievably slow:
>>> min(timeit.repeat('unittest.TestCase().assertEqual(a, b)', setup='import unittest; a = 1; b = 1'))
1.7452130000000352
>>> min(timeit.repeat('assert a == b, f"{a} != {b}"', setup='a = 1; b = 1'))
0.01641979999999421
which is a hint that this would be bad practice, i.e. unittest should be used for unit testing, not generic assertions.
I'll admit this question is a bit nit-picky, but Stack Overflow often has the ability to enlighten me on libraries/ideas that make programming faster, safer, and better, so I'd like to know: is there a better (built-in?) method for verbose assertions, or should I simply continue writing them verbosely as I have before?

You could adapt some code from unittest to create functions which do not require instantiating a full-blown TestCase. Here is a small example:
import difflib
import pprint
from unittest.util import safe_repr, _common_shorten_repr
class Asserter: # this class consists mostly of code from `unittest.TestCase`
    def assertEqual(self, first, second, msg=None):
        assertion_func = self._getAssertEqualityFunc(first, second)
        assertion_func(first, second, msg=msg)

    def _getAssertEqualityFunc(self, first, second):
        if type(first) is type(second):
            asserter = {
                dict: self.assertDictEqual,
                # ...
            }.get(type(first))
            if asserter is not None:
                return asserter
        return self._baseAssertEqual

    def _baseAssertEqual(self, first, second, msg=None):
        """The default assertEqual implementation, not type specific."""
        if not first == second:
            standardMsg = '%s != %s' % _common_shorten_repr(first, second)
            raise AssertionError('%s : %s' % (standardMsg, msg))

    def assertDictEqual(self, d1, d2, msg=None):
        self.assertIsInstance(d1, dict, 'First argument is not a dictionary')
        self.assertIsInstance(d2, dict, 'Second argument is not a dictionary')
        if d1 != d2:
            standardMsg = '%s != %s' % _common_shorten_repr(d1, d2)
            diff = ('\n' + '\n'.join(difflib.ndiff(
                pprint.pformat(d1).splitlines(),
                pprint.pformat(d2).splitlines())))
            standardMsg = standardMsg + diff
            raise AssertionError('%s : %s' % (standardMsg, msg))

    def assertIsInstance(self, obj, cls, msg=None):
        if not isinstance(obj, cls):
            standardMsg = '%s is not an instance of %r' % (safe_repr(obj), cls)
            raise AssertionError('%s : %s' % (standardMsg, msg))

__ASSERTER = Asserter() # only one required
assertEqual = __ASSERTER.assertEqual # get a reference to the method

def main():
    assertEqual({'a': 1, 'b': 2},
                {'a': 1, 'b': 456789})

if __name__ == "__main__":
    main()
which produces:
Traceback (most recent call last):
File "/home/stack_overflow/so70675609.py", line 56, in <module>
main()
File "/home/stack_overflow/so70675609.py", line 51, in main
assertEqual({'a': 1, 'b': 2},
File "/home/stack_overflow/so70675609.py", line 11, in assertEqual
assertion_func(first, second, msg=msg)
File "/home/stack_overflow/so70675609.py", line 38, in assertDictEqual
raise AssertionError('%s : %s' % (standardMsg, msg))
AssertionError: {'a': 1, 'b': 2} != {'a': 1, 'b': 456789}
- {'a': 1, 'b': 2}
? ^
+ {'a': 1, 'b': 456789}
? ^^^^^^
: None
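If all you need is the f-string-style message from the question, option 1 (writing your own tiny helper) can also stay very small. A minimal sketch, not taken from the answer above, with the caveat that unlike a bare assert it is not stripped when Python runs with -O:

def assert_equal(a, b, msg=None):
    # Build the verbose message only when the check actually fails.
    if a != b:
        raise AssertionError(msg if msg is not None else f"{a!r} != {b!r}")

assert_equal(1, 1)  # passes silently
assert_equal(1, 2)  # AssertionError: 1 != 2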

Related

Timed nose tests not failing properly

I have the following test that does not fail when running an especially long fib assert.
Tests that don't fail properly
#!/usr/bin/env python2.7
import unittest
from fib import fib
from nose.tools import timed

def test_gen(expected, actual):
    @timed(.001)
    def test_method(self):
        return self.assertEqual(expected, actual)
    return test_method

if __name__ == '__main__':
    all_cases = {
        'user': ((fib(40), 102334155), (fib(2), 1), (fib(5), 5)),
    }
    fails = {}
    for username, cases in all_cases.items():
        class FibTests(unittest.TestCase):
            pass
        for index, case in enumerate(cases):
            test_name = 'test_{0}_{1}'.format(username, index)
            test = test_gen(case[1], case[0])
            setattr(FibTests, test_name, test)
        suite = unittest.TestLoader().loadTestsFromTestCase(FibTests)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        fails[username] = len(result.failures)
    print fails
(Slow) Fib.py Implementation
def fib(x):
    if x == 0:
        return 0
    elif x == 1:
        return 1
    return fib(x - 2) + fib(x - 1)
Tests that fail properly
import time
import unittest
from fib import fib
from nose.tools import timed

def test_gen(expected, actual):
    @timed(.001)
    def test_method(self):
        time.sleep(.2)
        return self.assertEqual(expected, actual)
    return test_method
You are timing the wrong thing, and never actually calling your test method. You are also going to an awful lot of effort to dynamically create and add your cases to a class that does nothing but act as a container for tests, when nose supports generator test cases, which would be much easier to read and follow than what you have here. Also, is this a test file or a piece of product code? If it's a test file, then having all of that code in if __name__ == '__main__' is kind of odd, and if it is a product code file, then having the test_gen function and the unittest and nose import statements in the unconditionally run part doesn't make much sense. I'd recommend doing it the following way, and not trying to make the test script self-runnable; just launch it with nose.
from fib import fib
from nose.tools import timed

fib = timed(.001)(fib)

def execute(username, fib_arg, expected_output):
    result = fib(fib_arg)
    assert result == expected_output, ('%s fib(%d) got %d, expected %d'
                                       % (username, fib_arg, result, expected_output))

def test_fib():
    for name, datasets in (('user', ((40, 102334155), (2, 1), (5, 5))),):
        for arg, expected in datasets:
            yield execute, name, arg, expected

How can I get the values of the locals of a function after it has been executed?

Suppose I have a function like f(a, b, c=None). The aim is to call the function like f(*args, **kwargs), and then construct a new set of args and kwargs such that:
If the function had default values, I should be able to acquire their values. For example, if I call it like f(1, 2), I should be able to get the tuple (1, 2, None) and/or the dictionary {'c': None}.
If the value of any of the arguments was modified inside the function, get the new value. For example, if I call it like f(1, 100000, 3) and the function does if b > 500: b = 5 modifying the local variable, I should be able to get the tuple (1, 5, 3).
The aim here is to create a decorator that finishes the job of a function. The original function acts as a preamble setting up the data for the actual execution, and the decorator finishes the job.
Edit: I'm adding an example of what I'm trying to do. It's a module for making proxies for other classes.
class Spam(object):
    """A fictional class that we'll make a proxy for"""
    def eggs(self, start, stop, step):
        """A fictional method"""
        return range(start, stop, step)

class ProxyForSpam(clsproxy.Proxy):
    proxy_for = Spam

    @clsproxy.signature_preamble
    def eggs(self, start, stop, step=1):
        start = max(0, start)
        stop = min(100, stop)
And then, we'll have that:
ProxyForSpam().eggs(-10, 200) -> Spam().eggs(0, 100, 1)
ProxyForSpam().eggs(3, 4) -> Spam().eggs(3, 4, 1)
There are two recipes available here, one which requires an external library and another that uses only the standard library. They don't quite do what you want, in that they actually modify the function being executed to obtain its locals() rather than obtain the locals() after function execution, which is impossible, since the local stack no longer exists after the function finishes execution.
Another option is to see what debuggers, such as WinPDB or even the pdb module do. I suspect they use the inspect module (possibly along with others), to get the frame inside which a function is executing and retrieve locals() that way.
EDIT: After reading some code in the standard library, the file you want to look at is probably bdb.py, which should be wherever the rest of your Python standard library is. Specifically, look at set_trace() and related functions. This will give you an idea of how the Python debugger breaks into the class. You might even be able to use it directly. To get the frame to pass to set_trace() look at the inspect module.
I've stumbled upon this very need today and wanted to share my solution.
import sys

def call_function_get_frame(func, *args, **kwargs):
    """
    Calls the function *func* with the specified arguments and keyword
    arguments and snatches its local frame before it actually executes.
    """
    frame = None
    trace = sys.gettrace()
    def snatch_locals(_frame, name, arg):
        nonlocal frame
        if frame is None and name == 'call':
            frame = _frame
            sys.settrace(trace)
        return trace
    sys.settrace(snatch_locals)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(trace)
    return frame, result
The idea is to use sys.settrace() to catch the frame of the next 'call' event. Tested on CPython 3.6.
Example usage
import types

def namespace_decorator(func):
    frame, result = call_function_get_frame(func)
    try:
        module = types.ModuleType(func.__name__)
        module.__dict__.update(frame.f_locals)
        return module
    finally:
        del frame

@namespace_decorator
def mynamespace():
    eggs = 'spam'
    class Bar:
        def hello(self):
            print("Hello, World!")

assert mynamespace.eggs == 'spam'
mynamespace.Bar().hello()
I don't see how you could do this non-intrusively -- after the function is done executing, it doesn't exist any more -- there's no way you can reach inside something that doesn't exist.
If you can control the functions that are being used, you can do an intrusive approach like
def fn(x, y, z, vars):
    '''
    vars is an empty dict that we use to pass things back to the caller
    '''
    x += 1
    y -= 1
    z *= 2
    vars.update(locals())

>>> updated = {}
>>> fn(1, 2, 3, updated)
>>> print updated
{'y': 1, 'x': 2, 'z': 6, 'vars': {...}}
>>>
...or you can just require that those functions return locals() -- as @Thomas K asks above, what are you really trying to do here?
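For completeness, a minimal sketch of the "just return locals()" idea (the function and values here are invented for illustration):

def fn(x, y, z=None):
    if z is None:
        z = 0
    x += 1
    return locals()  # hand the final local bindings back to the caller

final_vars = fn(1, 2)
assert final_vars == {'x': 2, 'y': 2, 'z': 0}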
Witchcraft below, read on at your OWN risk(!)
I have no clue what you want to do with this; it's possible, but it's an awful hack...
Anyways, I HAVE WARNED YOU(!). Count yourself lucky if such things don't work in your favorite language...
from inspect import getargspec, ismethod
import inspect
import sys  # needed by getcallargs below

def main():
    @get_modified_values
    def foo(a, f, b):
        print a, f, b
        a = 10
        if a == 2:
            return a
        f = 'Hello World'
        b = 1223

    e = 1
    c = 2
    foo(e, 1000, b = c)

# intercept a function and retrieve the modified values
def get_modified_values(target):
    def wrapper(*args, **kwargs):
        # get the applied args
        kargs = getcallargs(target, *args, **kwargs)

        # get the source code
        src = inspect.getsource(target)
        lines = src.split('\n')

        # oh noes string patching of the function
        unindent = len(lines[0]) - len(lines[0].lstrip())
        indent = lines[0][:len(lines[0]) - len(lines[0].lstrip())]
        lines[0] = ''
        lines[1] = indent + 'def _temp(_args, ' + lines[1].split('(')[1]
        setter = []
        for k in kargs.keys():
            setter.append('_args["%s"] = %s' % (k, k))

        i = 0
        while i < len(lines):
            indent = lines[i][:len(lines[i]) - len(lines[i].lstrip())]
            if lines[i].find('return ') != -1 or lines[i].find('return\n') != -1:
                for e in setter:
                    lines.insert(i, indent + e)
                i += len(setter)
            elif i == len(lines) - 2:
                for e in setter:
                    lines.insert(i + 1, indent + e)
                break
            i += 1

        for i in range(0, len(lines)):
            lines[i] = lines[i][unindent:]
        data = '\n'.join(lines) + "\n"

        # setup variables
        frame = inspect.currentframe()
        loc = inspect.getouterframes(frame)[1][0].f_locals
        glob = inspect.getouterframes(frame)[1][0].f_globals
        loc['_temp'] = None

        # compile patched function and call it
        func = compile(data, '<witchstuff>', 'exec')
        eval(func, glob, loc)
        loc['_temp'](kargs, *args, **kwargs)

        # there you go....
        print kargs
        # >> {'a': 10, 'b': 1223, 'f': 'Hello World'}
    return wrapper

# from python 2.7 inspect module
def getcallargs(func, *positional, **named):
    """Get the mapping of arguments to values.
    A dict is returned, with keys the function argument names (including the
    names of the * and ** arguments, if any), and values the respective bound
    values from 'positional' and 'named'."""
    args, varargs, varkw, defaults = getargspec(func)
    f_name = func.__name__
    arg2value = {}
    # The following closures are basically because of tuple parameter unpacking.
    assigned_tuple_params = []
    def assign(arg, value):
        if isinstance(arg, str):
            arg2value[arg] = value
        else:
            assigned_tuple_params.append(arg)
            value = iter(value)
            for i, subarg in enumerate(arg):
                try:
                    subvalue = next(value)
                except StopIteration:
                    raise ValueError('need more than %d %s to unpack' %
                                     (i, 'values' if i > 1 else 'value'))
                assign(subarg, subvalue)
            try:
                next(value)
            except StopIteration:
                pass
            else:
                raise ValueError('too many values to unpack')
    def is_assigned(arg):
        if isinstance(arg, str):
            return arg in arg2value
        return arg in assigned_tuple_params
    if ismethod(func) and func.im_self is not None:
        # implicit 'self' (or 'cls' for classmethods) argument
        positional = (func.im_self,) + positional
    num_pos = len(positional)
    num_total = num_pos + len(named)
    num_args = len(args)
    num_defaults = len(defaults) if defaults else 0
    for arg, value in zip(args, positional):
        assign(arg, value)
    if varargs:
        if num_pos > num_args:
            assign(varargs, positional[-(num_pos-num_args):])
        else:
            assign(varargs, ())
    elif 0 < num_args < num_pos:
        raise TypeError('%s() takes %s %d %s (%d given)' % (
            f_name, 'at most' if defaults else 'exactly', num_args,
            'arguments' if num_args > 1 else 'argument', num_total))
    elif num_args == 0 and num_total:
        raise TypeError('%s() takes no arguments (%d given)' %
                        (f_name, num_total))
    for arg in args:
        if isinstance(arg, str) and arg in named:
            if is_assigned(arg):
                raise TypeError("%s() got multiple values for keyword "
                                "argument '%s'" % (f_name, arg))
            else:
                assign(arg, named.pop(arg))
    if defaults:  # fill in any missing values with the defaults
        for arg, value in zip(args[-num_defaults:], defaults):
            if not is_assigned(arg):
                assign(arg, value)
    if varkw:
        assign(varkw, named)
    elif named:
        unexpected = next(iter(named))
        if isinstance(unexpected, unicode):
            unexpected = unexpected.encode(sys.getdefaultencoding(), 'replace')
        raise TypeError("%s() got an unexpected keyword argument '%s'" %
                        (f_name, unexpected))
    unassigned = num_args - len([arg for arg in args if is_assigned(arg)])
    if unassigned:
        num_required = num_args - num_defaults
        raise TypeError('%s() takes %s %d %s (%d given)' % (
            f_name, 'at least' if defaults else 'exactly', num_required,
            'arguments' if num_required > 1 else 'argument', num_total))
    return arg2value

main()
Output:
1 1000 2
{'a': 10, 'b': 1223, 'f': 'Hello World'}
There you go... I'm not responsible for any small children that get eaten by demons or anything of the like (or if it breaks on complicated functions).
PS: The inspect module is pure EVIL.
Since you are trying to manipulate variables in one function and do some job based on those variables in another function, the cleanest way to do it is to have those variables be an object's attributes.
It could be a dictionary defined inside the decorator; access to it inside the decorated function would then be as a "nonlocal" variable. That cleans up the explicit dictionary parameter that @bgporter proposed:
def eggs(self, a, b, c=None):
    # nonlocal parms  ## uncomment in Python 3
    parms["a"] = a
    ...
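One way to flesh that idea out (a sketch only; every name here is invented, and the dict is simply handed to the preamble rather than captured as a true nonlocal):

def finishes_job(func):
    def wrapper(*args, **kwargs):
        parms = {}
        func(parms, *args, **kwargs)  # the preamble records its values here
        # the decorator "finishes the job" from whatever was recorded
        return range(parms["start"], parms["stop"], parms["step"])
    return wrapper

@finishes_job
def eggs(parms, start, stop, step=1):
    parms["start"] = max(0, start)
    parms["stop"] = min(100, stop)
    parms["step"] = step

print(list(eggs(-10, 200)))  # -> [0, 1, 2, ..., 99]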
To be even more clean, you probably should have all these parameters as attributes of the instance (self) - so that no "magical" variable has to be used inside the decorated function.
As for doing it "magically" without having the parameters set as attributes of certain object explicitly, nor having the decorated function to return the parameters themselves (which is also an option) - that is, to have it to work transparently with any decorated function - I can't think of a way that does not involve manipulating the bytecode of the function itself.
If you can think of a way to make the wrapped function raise an exception at return time, you could trap the exception and check the execution trace.
If it is so important to do it automatically that you consider altering the function bytecode an option, feel free to ask me further.

Python library 'unittest': Generate multiple tests programmatically [duplicate]

Possible Duplicate:
How do you generate dynamic (parameterized) unit tests in Python?
I have a function to test, under_test, and a set of expected input/output pairs:
[
(2, 332),
(234, 99213),
(9, 3),
# ...
]
I would like each one of these input/output pairs to be tested in its own test_* method. Is that possible?
This is sort of what I want, but forcing every single input/output pair into a single test:
class TestPreReqs(unittest.TestCase):
    def setUp(self):
        self.expected_pairs = [(23, 55), (4, 32)]

    def test_expected(self):
        for exp in self.expected_pairs:
            self.assertEqual(under_test(exp[0]), exp[1])

if __name__ == '__main__':
    unittest.main()
(Also, do I really want to be putting that definition of self.expected_pairs in setUp?)
UPDATE: Trying doublep's advice:
class TestPreReqs(unittest.TestCase):
    def setUp(self):
        expected_pairs = [
            (2, 3),
            (42, 11),
            (3, None),
            (31, 99),
        ]
        for k, pair in expected_pairs:
            setattr(TestPreReqs, 'test_expected_%d' % k, create_test(pair))

def create_test(pair):
    def do_test_expected(self):
        self.assertEqual(get_pre_reqs(pair[0]), pair[1])
    return do_test_expected

if __name__ == '__main__':
    unittest.main()
This does not work. 0 tests are run. Did I adapt the example incorrectly?
I had to do something similar. I created simple TestCase subclasses that took a value in their __init__, like this:
class KnownGood(unittest.TestCase):
    def __init__(self, input, output):
        super(KnownGood, self).__init__()
        self.input = input
        self.output = output

    def runTest(self):
        self.assertEqual(function_to_test(self.input), self.output)
I then made a test suite with these values:
def suite():
    suite = unittest.TestSuite()
    suite.addTests(KnownGood(input, output) for input, output in known_values)
    return suite
You can then run it from your main method:
if __name__ == '__main__':
    unittest.TextTestRunner().run(suite())
The advantages of this are:
As you add more values, the number of reported tests increases, which makes you feel like you are doing more.
Each individual test case can fail individually
It's conceptually simple, since each input/output value is converted into one TestCase
Not tested:
class TestPreReqs(unittest.TestCase):
    ...

def create_test(pair):
    def do_test_expected(self):
        self.assertEqual(under_test(pair[0]), pair[1])
    return do_test_expected

for k, pair in enumerate([(23, 55), (4, 32)]):
    test_method = create_test(pair)
    test_method.__name__ = 'test_expected_%d' % k
    setattr(TestPreReqs, test_method.__name__, test_method)
If you use this often, you could prettify this by using utility functions and/or decorators, I guess. Note that the pairs are not an attribute of a TestPreReqs object in this example (and so setUp is gone). Rather, they are "hardwired", in a sense, into the TestPreReqs class.
As often with Python, there is a complicated way to provide a simple solution.
In that case, we can use metaprogramming, decorators, and various nifty Python tricks to achieve a nice result. Here is what the final test will look like:
import unittest
# Some magic code will be added here later
class DummyTest(unittest.TestCase):
    @for_examples(1, 2)
    @for_examples(3, 4)
    def test_is_smaller_than_four(self, value):
        self.assertTrue(value < 4)

    @for_examples((1,2), (2,4), (3,7))
    def test_double_of_X_is_Y(self, x, y):
        self.assertEqual(2 * x, y)

if __name__ == "__main__":
    unittest.main()
When executing this script, the result is:
..F...F
======================================================================
FAIL: test_double_of_X_is_Y(3,7)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/xdecoret/Documents/foo.py", line 22, in method_for_example
method(self, *example)
File "/Users/xdecoret/Documents/foo.py", line 41, in test_double_of_X_is_Y
self.assertEqual(2 * x, y)
AssertionError: 6 != 7
======================================================================
FAIL: test_is_smaller_than_four(4)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/xdecoret/Documents/foo.py", line 22, in method_for_example
method(self, *example)
File "/Users/xdecoret/Documents/foo.py", line 37, in test_is_smaller_than_four
self.assertTrue(value < 4)
AssertionError
----------------------------------------------------------------------
Ran 7 tests in 0.001s
FAILED (failures=2)
which achieves our goal:
it is unobtrusive: we derive from TestCase as usual
we write parametrized tests only once
each example value is considered an individual test
the decorator can be stacked, so it is easy to use sets of examples (e.g., using a function to build the list of values from example files or directories)
The icing on the cake: it works for an arbitrary arity of the test signature.
So how does it work? Basically, the decorator stores the examples in an attribute of the function, a metaclass replaces every decorated function with a list of functions, and we replace unittest.TestCase with our new magic class. The magic code (to be pasted at the "magic" comment above) is:
__examples__ = "__examples__"

def for_examples(*examples):
    def decorator(f, examples=examples):
        setattr(f, __examples__, getattr(f, __examples__, ()) + examples)
        return f
    return decorator

class TestCaseWithExamplesMetaclass(type):
    def __new__(meta, name, bases, dict):
        def tuplify(x):
            if not isinstance(x, tuple):
                return (x,)
            return x
        for methodname, method in dict.items():
            if hasattr(method, __examples__):
                dict.pop(methodname)
                examples = getattr(method, __examples__)
                delattr(method, __examples__)
                for example in (tuplify(x) for x in examples):
                    def method_for_example(self, method = method, example = example):
                        method(self, *example)
                    methodname_for_example = methodname + "(" + ", ".join(str(v) for v in example) + ")"
                    dict[methodname_for_example] = method_for_example
        return type.__new__(meta, name, bases, dict)

class TestCaseWithExamples(unittest.TestCase):
    __metaclass__ = TestCaseWithExamplesMetaclass
    pass

unittest.TestCase = TestCaseWithExamples
If someone wants to package this nicely, or propose a patch for unittest, feel free! A mention of my name will be appreciated.
The code can be made much simpler and fully encapsulated in the decorator if you are ready to use frame introspection (import the sys module):
def for_examples(*parameters):
    def tuplify(x):
        if not isinstance(x, tuple):
            return (x,)
        return x
    def decorator(method, parameters=parameters):
        for parameter in (tuplify(x) for x in parameters):
            def method_for_parameter(self, method=method, parameter=parameter):
                method(self, *parameter)
            args_for_parameter = ",".join(repr(v) for v in parameter)
            name_for_parameter = method.__name__ + "(" + args_for_parameter + ")"
            frame = sys._getframe(1)  # pylint: disable-msg=W0212
            frame.f_locals[name_for_parameter] = method_for_parameter
        return None
    return decorator
nose (suggested by @Paul Hankin)
#!/usr/bin/env python
# file: test_pairs_nose.py
from nose.tools import eq_ as eq

from mymodule import f

def test_pairs():
    for input, output in [ (2, 332), (234, 99213), (9, 3), ]:
        yield _test_f, input, output

def _test_f(input, output):
    try:
        eq(f(input), output)
    except AssertionError:
        if input == 9:  # expected failure
            from nose.exc import SkipTest
            raise SkipTest("expected failure")
        else:
            raise

if __name__=="__main__":
    import nose; nose.main()
Example:
$ nosetests test_pairs_nose -v
test_pairs_nose.test_pairs(2, 332) ... ok
test_pairs_nose.test_pairs(234, 99213) ... ok
test_pairs_nose.test_pairs(9, 3) ... SKIP: expected failure
----------------------------------------------------------------------
Ran 3 tests in 0.001s
OK (SKIP=1)
unittest (an approach similar to @doublep's)
#!/usr/bin/env python
import unittest2 as unittest

from mymodule import f

def add_tests(generator):
    def class_decorator(cls):
        """Add tests to `cls` generated by `generator()`."""
        for f, input, output in generator():
            test = lambda self, i=input, o=output, f=f: f(self, i, o)
            test.__name__ = "test_%s(%r, %r)" % (f.__name__, input, output)
            setattr(cls, test.__name__, test)
        return cls
    return class_decorator

def _test_pairs():
    def t(self, input, output):
        self.assertEqual(f(input), output)
    for input, output in [ (2, 332), (234, 99213), (9, 3), ]:
        tt = t if input != 9 else unittest.expectedFailure(t)
        yield tt, input, output

class TestCase(unittest.TestCase):
    pass
TestCase = add_tests(_test_pairs)(TestCase)

if __name__=="__main__":
    unittest.main()
Example:
$ python test_pairs_unit2.py -v
test_t(2, 332) (__main__.TestCase) ... ok
test_t(234, 99213) (__main__.TestCase) ... ok
test_t(9, 3) (__main__.TestCase) ... expected failure
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK (expected failures=1)
If you don't want to install unittest2 then add:
try:
    import unittest2 as unittest
except ImportError:
    import unittest

    if not hasattr(unittest, 'expectedFailure'):
        import functools
        def _expectedFailure(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    func(*args, **kwargs)
                except AssertionError:
                    pass
                else:
                    raise AssertionError("UnexpectedSuccess")
            return wrapper
        unittest.expectedFailure = _expectedFailure
Some of the tools available for doing parametrized tests in Python are:
Nose test generators (only for function tests, not TestCase classes)
nose-parametrized by David Wolever (also for TestCase classes)
Unittest template by Boris Feld
Parametrized tests in py.test
parametrized-testcase by Austin Bingham
See also question 1676269 for more answers to this question.
I think Rory's solution is the cleanest and shortest. However, this variation of doublep's "create synthetic functions in a TestCase" also works:
from functools import partial

class TestAllReports(unittest.TestCase):
    pass

def test_spamreport(name):
    assert classify(getSample(name))=='spamreport', name

for rep in REPORTS:
    testname = 'test_' + rep
    testfunc = partial(test_spamreport, rep)
    testfunc.__doc__ = testname
    setattr(TestAllReports, testname, testfunc)

if __name__=='__main__':
    unittest.main(argv=sys.argv + ['--verbose'])

Real-world examples of nested functions

I asked previously how nested functions work, but unfortunately I still don't quite get it. To understand them better, can someone please show some real-world, practical usage examples of nested functions?
Many thanks
Your question made me curious, so I looked in some real-world code: the Python standard library. I found 67 examples of nested functions. Here are a few, with explanations.
One very simple reason to use a nested function is simply that the function you're defining doesn't need to be global, because only the enclosing function uses it. A typical example from Python's quopri.py standard library module:
def encode(input, output, quotetabs, header = 0):
    ...
    def write(s, output=output, lineEnd='\n'):
        # RFC 1521 requires that the line ending in a space or tab must have
        # that trailing character encoded.
        if s and s[-1:] in ' \t':
            output.write(s[:-1] + quote(s[-1]) + lineEnd)
        elif s == '.':
            output.write(quote(s) + lineEnd)
        else:
            output.write(s + lineEnd)
    ...  # 35 more lines of code that call write in several places
Here there was some common code within the encode function, so the author simply factored it out into a write function.
Another common use for nested functions is with re.sub. Here's some code from the json/encoder.py standard library module:
def encode_basestring(s):
    """Return a JSON representation of a Python string
    """
    def replace(match):
        return ESCAPE_DCT[match.group(0)]
    return '"' + ESCAPE.sub(replace, s) + '"'
Here ESCAPE is a regular expression, and ESCAPE.sub(replace, s) finds all matches of ESCAPE in s and replaces each one with replace(match).
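A self-contained toy version of the same pattern (the escape table below is made up and much smaller than the real json one):

import re

ESCAPE = re.compile(r'[\\"]')             # characters we want to escape
ESCAPE_DCT = {'\\': '\\\\', '"': '\\"'}   # how each one should be rewritten

def encode_basestring(s):
    def replace(match):
        return ESCAPE_DCT[match.group(0)]
    return '"' + ESCAPE.sub(replace, s) + '"'

print(encode_basestring('say "hi"'))      # -> "say \"hi\""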
In fact, any API, like re.sub, that accepts a function as a parameter can lead to situations where nested functions are convenient. For example, in turtle.py there's some silly demo code that does this:
def baba(xdummy, ydummy):
    clearscreen()
    bye()
...
tri.write("  Click me!", font = ("Courier", 12, "bold") )
tri.onclick(baba, 1)
onclick expects you to pass an event-handler function, so we define one and pass it in.
Decorators are a very popular use for nested functions. Here's an example of a decorator that prints a statement before and after any call to the decorated function.
def entry_exit(f):
    def new_f(*args, **kwargs):
        print "Entering", f.__name__
        f(*args, **kwargs)
        print "Exited", f.__name__
    return new_f

@entry_exit
def func1():
    print "inside func1()"

@entry_exit
def func2():
    print "inside func2()"

func1()
func2()
print func1.__name__
Nested functions avoid cluttering other parts of the program with other functions and variables that only make sense locally.
A function that returns Fibonacci numbers could be defined as follows:
>>> def fib(n):
        def rec():
            return fib(n-1) + fib(n-2)
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            return rec()

>>> map(fib, range(10))
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
EDIT: In practice, generators would be a better solution for this, but the example shows how to take advantage of nested functions.
They are useful when using functions that take other functions as input. Say you're in a function, and want to sort a list of items based on the items' value in a dict:
import random

def f(items):
    vals = {}
    for i in items: vals[i] = random.randint(0, 100)
    def key(i): return vals[i]
    items.sort(key=key)
You can just define key right there and have it use vals, a local variable.
Another use-case is callbacks.
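For example (a small sketch, with a made-up process function standing in for whatever API takes the callback):

def make_progress_callback(label):
    def report(done, total):              # nested function used as a callback
        print("%s: %d/%d" % (label, done, total))
    return report

def process(items, on_progress):          # stand-in for an API that accepts a callback
    for i, item in enumerate(items, 1):
        on_progress(i, len(items))

process(["a", "b", "c"], make_progress_callback("upload"))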
I have only had to use nested functions when creating decorators. A nested function is basically a way of adding some behavior to a function without knowing what the function is that you are adding behavior to.
from functools import wraps
from types import InstanceType
def printCall(func):
    def getArgKwargStrings(*args, **kwargs):
        argsString = "".join(["%s, " % (arg) for arg in args])
        kwargsString = "".join(["%s=%s, " % (key, value) for key, value in kwargs.items()])
        if not len(kwargs):
            if len(argsString):
                argsString = argsString[:-2]
        else:
            kwargsString = kwargsString[:-2]
        return argsString, kwargsString

    @wraps(func)
    def wrapper(*args, **kwargs):
        ret = None
        if args and isinstance(args[0], InstanceType) and getattr(args[0], func.__name__, None):
            instance, args = args[0], args[1:]
            argsString, kwargsString = getArgKwargStrings(*args, **kwargs)
            ret = func(instance, *args, **kwargs)
            print "Called %s.%s(%s%s)" % (instance.__class__.__name__, func.__name__, argsString, kwargsString)
            print "Returned %s" % str(ret)
        else:
            argsString, kwargsString = getArgKwargStrings(*args, **kwargs)
            ret = func(*args, **kwargs)
            print "Called %s(%s%s)" % (func.__name__, argsString, kwargsString)
            print "Returned %s" % str(ret)
        return ret
    return wrapper

def sayHello(name):
    print "Hello, my name is %s" % (name)

if __name__ == "__main__":
    sayHelloAndPrintDebug = printCall(sayHello)
    name = "Nimbuz"
    sayHelloAndPrintDebug(name)
Ignore all the mumbo jumbo in the "printCall" function for right now and focus only on the "sayHello" function and below. What we're doing here is we want to print out how the "sayHello" function was called every time it is called, without knowing or altering what the "sayHello" function does. So we redefine the "sayHello" function by passing it to "printCall", which returns a NEW function that does what the "sayHello" function does AND prints how the "sayHello" function was called. This is the concept of decorators.
Putting "@printCall" above the sayHello definition accomplishes the same thing:
@printCall
def sayHello(name):
    print "Hello, my name is %s" % (name)

if __name__ == "__main__":
    name = "Nimbuz"
    sayHello(name)
Yet another (very simple) example. A function that returns another function. Note how the inner function (that is returned) can use variables from the outer function's scope.
def create_adder(x):
    def _adder(y):
        return x + y
    return _adder

add2 = create_adder(2)
add100 = create_adder(100)

>>> add2(50)
52
>>> add100(50)
150
Python Decorators
This is actually another topic to learn, but if you look at the stuff on 'Using Functions as Decorators', you'll see some examples of nested functions.
OK, besides decorators: say you had an application where you needed to sort a list of strings based on substrings which varied from time to time. Now, the sorted function takes a key= argument, which is a function of one argument: the item (a string in this case) to be sorted. So how do you tell this function which substring to sort on? A closure, or nested function, is perfect for this:
def sort_key_factory(start, stop):
    def sort_key(string):
        return string[start: stop]
    return sort_key
Simple eh? You can expand on this by encapsulating start and stop in a tuple or a slice object and then passing a sequence or iterable of these to the sort_key_factory.
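For instance (a small illustrative sketch using sort_key_factory from above, with made-up data):

words = ["apple", "banana", "cherry"]

# sort on the substring spanning characters 1-2 of each word
print(sorted(words, key=sort_key_factory(1, 3)))
# -> ['banana', 'cherry', 'apple']   (keys are 'an', 'he', 'pp')

# drive the factory from a sequence of (start, stop) pairs, as suggested above
for start, stop in [(0, 1), (1, 3)]:
    print(sorted(words, key=sort_key_factory(start, stop)))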

Get last function's call arguments from traceback?

Can I get the parameters of the last function called in traceback? How?
I want to make a catcher for standard errors to keep the code readable, yet provide detailed information to the user.
In the following example I want GET_PARAMS to return me a tuple of the parameters supplied to os.chown. Examining the inspect module, as advised by Alex Martelli, I couldn't find how to do that.
import os
import sys

def catch_errors(fn):
    def decorator(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except (IOError, OSError):
            msg = sys.exc_info()[2].tb_frame.f_locals['error_message']
            quit(msg.format(SEQUENCE_OF_PARAMETERS_OF_THE_LAST_FUNCTION_CALLED)
                 + '\nError #{0[0]}: {0[1]}'.format(sys.exc_info()[1].args), 1)
    return decorator

@catch_errors
def do_your_job():
    error_message = 'Can\'t change folder ownership \'{0}\' (uid:{1}, gid:{2})'
    os.chown('/root', 1000, 1000)  # note that params aren't named vars.

if __name__ == '__main__' and os.getenv('USERNAME') != 'root':
    do_your_job()
(Thanks to Jim Robert for the decorator)
For such inspection tasks, always think first of module inspect in the standard library. Here, inspect.getargvalues gives you the argument values given a frame, and inspect.getinnerframes gives you the frames of interest from a traceback object.
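A minimal sketch of combining the two (not the asker's catch_errors decorator, just an illustration of the API):

import inspect
import sys

def args_of_innermost_frame(tb):
    frame = inspect.getinnerframes(tb)[-1][0]   # innermost frame of the traceback
    arginfo = inspect.getargvalues(frame)
    return dict((name, arginfo.locals[name]) for name in arginfo.args)

def divide(a, b):
    return a / b

try:
    divide(1, 0)
except ZeroDivisionError:
    print(args_of_innermost_frame(sys.exc_info()[2]))   # -> {'a': 1, 'b': 0}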
Here is an example of such a function, and some problems that you can't get around:
import sys

def get_params(tb):
    while tb.tb_next:
        tb = tb.tb_next
    frame = tb.tb_frame
    code = frame.f_code
    argcount = code.co_argcount
    if code.co_flags & 4:  # *args
        argcount += 1
    if code.co_flags & 8:  # **kwargs
        argcount += 1
    names = code.co_varnames[:argcount]
    params = {}
    for name in names:
        params[name] = frame.f_locals.get(name, '<deleted>')
    return params

def f(a, b=2, c=3, *d, **e):
    del c
    c = 4
    e['g'] = 6
    assert False

try:
    f(1, f=5)
except:
    print get_params(sys.exc_info()[2])
The output is:
{'a': 1, 'c': 4, 'b': 2, 'e': {'g': 6, 'f': 5}, 'd': ()}
I didn't use inspect.getinnerframes(), to show another way to get the needed frame. Although it simplifies things a bit, it also does some extra work that is not needed here, while being relatively slow (inspect.getinnerframes() reads the source file for every module in the traceback; this is not important for one debugging call, but could be an issue in other cases).
The problem with using a decorator for what you're trying to achieve is that the frame the exception handler gets is do_your_job()'s, not os.listdir()'s, os.makedirs()'s or os.chown()'s. So the information you'll be printing out is the arguments to do_your_job(). In order to get the behavior I think you intend, you would have to decorate all the library functions you're calling.
