The Problem:
We've been using the nose test runner for quite a while.
From time to time, I see eq_() calls in our tests:
eq_(actual, expected)
instead of the common:
self.assertEqual(actual, expected)
The question:
Is there any benefit of using nose.tools.eq_ as opposed to the standard unittest framework's assertEqual()? Are they actually equivalent?
Thoughts:
Well, for one, eq_ is shorter, but it has to be imported from nose.tools, which makes the tests dependent on the test runner library and can make it harder to switch to a different test runner, say py.test. On the other hand, we also use the @istest, @nottest and @attr nose decorators a lot.
They aren't equivalent to unittest.TestCase.assertEqual.
nose.tools.ok_(expr, msg=None)
Shorthand for assert. Saves 3 whole characters!
nose.tools.eq_(a, b, msg=None)
Shorthand for assert a == b, "%r != %r" % (a, b)
https://nose.readthedocs.org/en/latest/testing_tools.html#nose.tools.ok_
These docs are however slightly misleading. If you check the source you'll see eq_ actually is:
def eq_(a, b, msg=None):
    if not a == b:
        raise AssertionError(msg or "%r != %r" % (a, b))
https://github.com/nose-devs/nose/blob/master/nose/tools/trivial.py#L25
This is pretty close to the base case of assertEqual:
def _baseAssertEqual(self, first, second, msg=None):
    """The default assertEqual implementation, not type specific."""
    if not first == second:
        standardMsg = '%s != %s' % _common_shorten_repr(first, second)
        msg = self._formatMessage(msg, standardMsg)
        raise self.failureException(msg)  # default: AssertionError
https://github.com/python/cpython/blob/9b5ef19c937bf9414e0239f82aceb78a26915215/Lib/unittest/case.py#L805
However, as hinted by the docstring and function name, assertEqual has the potential of being type-specific. This is something you lose with eq_ (or assert a == b, for that matter). unittest.TestCase has special cases for dicts, lists, tuples, sets, frozensets and strs. These mostly seem to facilitate prettier printing of error messages.
But assertEqual is a method of TestCase, so it can only be used inside TestCases. nose.tools.eq_ can be used anywhere, like a plain assert.
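To illustrate both points, here is a small sketch (assuming nose is installed; the test names are arbitrary):

from nose.tools import eq_
import unittest

def test_standalone():
    # eq_ works in a plain test function; no TestCase needed
    eq_([1, 2, 3], [1, 2, 3])

class TestContainers(unittest.TestCase):
    def test_lists(self):
        # assertEqual dispatches to a type-specific check for lists,
        # so a failure prints an element-by-element diff
        self.assertEqual([1, 2, 3], [1, 2, 3])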
Related
I have code with an assert statement in it. I'm also unit testing that code, and I want to write a test that passes when the assert statement fires for a given condition.
def do_something(m, n):
    assert m != 0, "m has to be greater than 1"
    .....
In unit testing, I want to:
class Test_(unittest.TestCase):
    def test_something(self):
        # if m=0
        self.assert?????(m=0, ??not sure??)
What should I write to test that the assert statement fires when m=0?
I saw something about a context manager. Is it related?
Since you already have a unit test, I'd remove the assert from the tested function and raise an exception instead.
You can then test for that exception in your unit test with assertRaises.
def do_something(m, n):
    if m <= 1:
        raise ValueError('m has to be greater than 1')

class Test_(unittest.TestCase):
    def test_something(self):
        with self.assertRaises(ValueError):
            do_something(0, 'whatever')
I mostly agree with @DeepSpace, but if you want to keep using assert, you can test that your function raises AssertionError under the conditions you expect.
def do_something(m, n):
    # the condition matches the message, so 0 and negative values raise
    assert m > 1, "m has to be greater than 1"

class TestDoSomething(unittest.TestCase):
    def test_raises_for_zero(self):
        with self.assertRaises(AssertionError):
            do_something(0, 'whatever')

    def test_raises_for_negative(self):
        with self.assertRaises(AssertionError):
            do_something(-1, 'whatever')
And to expand on your question about context managers: in the examples above, self.assertRaises is used as a context manager in the with statement. Context managers are helpful when we temporarily want to change how something works; here, the context manager catches the exception and verifies that it is raised when do_something is called with certain arguments.
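The same context manager also gives you access to the caught exception, so you can check its message as well; a small sketch building on the tests above:

class TestDoSomethingMessage(unittest.TestCase):
    def test_error_message(self):
        # assertRaises stores the caught exception on the context manager
        with self.assertRaises(AssertionError) as cm:
            do_something(0, 'whatever')
        self.assertIn('greater than 1', str(cm.exception))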
It's generally accepted that using eval is bad practice. The accepted answer to this question states that there is almost always a better alternative. However, the timeit module in the standard library uses it, and I stumbled onto a case where I can't find a better alternative.
The unittest module has assertion functions of the form
self.assert*(..., msg=None)
which assert something and optionally print msg on failure. This allows running code like
for i in range(1, 20):
    self.assertEqual(foo(i), i, str(i) + ' failed')
Now consider the case where foo can raise an exception, e.g.,
def foo(i):
    if i % 5 == 0:
        raise ValueError()
    return i
then
On the one hand, msg won't be printed, as assertEqual was technically never called for the offending iteration.
On the other hand, fundamentally, foo(i) == i failed to be true (admittedly because foo(i) never finished executing), and so this is a case where it would be useful to print msg.
I would like a version that prints msg even if the failure cause was an exception; that would make it possible to see exactly which invocation failed. Using eval, I could write a version that takes strings, such as the following (somewhat simplified to illustrate the point):
def assertEqual(lhs, rhs, msg=None):
    try:
        lhs_val = eval(lhs)
        rhs_val = eval(rhs)
        if lhs_val != rhs_val:
            raise ValueError()
    except:
        if msg is not None:
            print msg
        raise
and then using
for i in range(1, 20):
    self.assertEqual('foo(i)', 'i', str(i) + ' failed')
Of course, technically it's possible to do this completely differently, by placing each call to assert* within a try/except/finally, but I could only think of extremely verbose alternatives (which also required duplicating msg).
Is the use of eval legitimate here, then, or is there a better alternative?
If an exception is raised unexpectedly, that points to a bug in your code, which is exactly the case you want to discover with your unit tests. It's not simply a failed equality; it's a bug you discovered that you need to fix.
If you expect an exception to be raised, assert that with:
with self.assertRaises(ValueError):
    foo(i)
If you expect no exception to be raised, use:
try:
    foo(i)
except ValueError:
    self.fail("foo() raised ValueError unexpectedly!")
If anything, I'd suggest you write your own wrapper like:
self.assertEqualsAndCatch(foo, i, msg=...)
That is: pass a callback and its arguments instead of a string to eval.
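A minimal sketch of such a wrapper (assertEqualsAndCatch is the hypothetical name suggested above; it mirrors the question's specific foo(i) == i comparison):

import unittest

class CatchingTestCase(unittest.TestCase):
    def assertEqualsAndCatch(self, func, arg, msg=None):
        # Hypothetical helper: compare func(arg) to arg, reporting msg
        # even when func raises instead of returning a wrong value.
        try:
            value = func(arg)
        except Exception:
            if msg is not None:
                print(msg)
            raise
        self.assertEqual(value, arg, msg)

which keeps the call sites close to the original:

for i in range(1, 20):
    self.assertEqualsAndCatch(foo, i, msg=str(i) + ' failed')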
In Python, I would like to check the type of the arguments passed to a function.
I wrote two implementations:
class FooFloat(float):
    pass

# Solution 1
def foo(foo_instance):
    if type(foo_instance) is not FooFloat:
        raise TypeError, 'foo only accepts FooFloat input'

# Solution 2
def foo(foo_instance):
    assert type(foo_instance) is FooFloat, 'foo only accepts FooFloat input'
In my opinion the latter is easier to read and involves less boilerplate. However, it throws an AssertionError, which is not the type of error I would like to raise.
Is there a better, more common third solution in this case?
I was thinking about a decorator:
@argtype('foo_instance', FooFloat)
def foo(foo_instance):
    pass
I like this idea and am thinking of using it in the future. I implemented the third solution as follows; please give it a try.
def argtype(arg_name, arg_type):
    def wrap_func(func):
        def wrap_args(*args, **kwargs):
            # note: this only validates arguments that are passed by keyword
            if not isinstance(kwargs.get(arg_name), arg_type):
                raise TypeError, "%s's argument %s should be %s type" % (func.__name__, arg_name, arg_type.__name__)
            return func(*args, **kwargs)
        return wrap_args
    return wrap_func

@argtype('bar', int)
@argtype('foo', int)
def work(foo, bar):
    print 'hello world'

work(foo='a', bar=1)
Besides, I think using isinstance is more suitable when there is inheritance.
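For instance, a quick sketch of the difference (reusing FooFloat from the question; BarFloat is a made-up subclass for illustration):

class FooFloat(float):
    pass

class BarFloat(FooFloat):
    pass

x = BarFloat(1.0)
print(type(x) is FooFloat)      # False: type() ignores inheritance
print(isinstance(x, FooFloat))  # True: isinstance accepts subclasses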
isinstance() does this. It accepts the type and subtypes.
if not isinstance(arg, <required type>):
    raise TypeError("arg: expected `%s', got `%s'" % (<required type>, type(arg)))
After eliminating all duplication (DRY principle), this becomes:
(n, t) = ('arg', <required_type>); o = locals()[n]
if not isinstance(o, t):
    raise TypeError("%(n)s: expected `%(t)s', got `%(rt)s'"
                    % dict(locals(), rt=type(o)))  # fine in this particular case.
    # See http://stackoverflow.com/a/26853961/648265
    # for other ways and limitations
del n, t, o
Personally, I would use assert instead, unless I care about which exception it throws (which I typically don't; an invalid argument is a fatal error, so I'm only interested in the fact one was thrown):
assert isinstance(arg, <type>), "expected `%s', got `%s'" % (<type>, type(arg))
# the arg name would be seen in the source line in the stack trace
Also consider duck typing instead of explicit type checks (this includes checking for special members, e.g. __iter__ for iterables). A full "duck typing vs type checks" discussion is beyond the scope of this topic, but explicit checks seem a better fit for highly specialized and/or complex interfaces than for simple, generic ones.
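As a rough sketch of the duck-typing alternative (the function name is made up):

def total(values):
    # Require only the behavior we need (iterability) rather than a
    # concrete type; lists, tuples, sets and generators all qualify.
    if not hasattr(values, '__iter__'):
        raise TypeError('expected an iterable, got %r' % type(values))
    return sum(values)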
This problem came up when performing a single test that had multiple independent failure modes, due to having multiple output streams. I also wanted to show the results of asserting the data on all those modes, regardless of which failed first. Python's unittest has no such feature outside of using a Suite to represent the single test, which was unacceptable since my single test always needed to run as a single unit; it just doesn't capture the nature of the thing.
A practical example is testing an object that also generates a log. You want to assert the output of its methods, but you also want to assert the log output. The two outputs require different tests, which can be neatly expressed as two stock assert expressions, but you also don't want the failure of one to hide the possible failure of the other. So you really need to test both at the same time.
I cobbled together this useful little widget to solve my problem.
def logFailures(fnList):
    failurelog = []
    for fn in fnList:
        try:
            fn()
        except AssertionError as e:
            failurelog.append("\nFailure %d: %s" % (len(failurelog) + 1, str(e)))
    if len(failurelog) != 0:
        raise AssertionError(
            "%d failures within test.\n %s" % (len(failurelog), "\n".join(failurelog))
        )
Which is used like so:
def test__myTest():
    # do some work here
    logFailures([
        lambda: assert_(False, "This test failed."),
        lambda: assert_(False, "This test also failed."),
    ])
The result is that logFailures() raises a single exception containing a log of all the assertion failures raised by the functions in the list.
The question: While this does the job, I'm left wondering if there's a better way to handle this, other than having to go to the length of creating nested suites of tests and so forth?
When using subtests, execution does not stop after the first failure:
https://docs.python.org/3/library/unittest.html#subtests
Here is an example with two failing asserts:
class TestMultipleAsserts(unittest.TestCase):
    def test_multipleasserts(self):
        with self.subTest():
            self.assertEqual(1, 0)
        with self.subTest():
            self.assertEqual(2, 0)
Output will be:
======================================================================
FAIL: test_multipleasserts (__main__.TestMultipleAsserts) (<subtest>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "./test.py", line 9, in test_multipleasserts
self.assertEqual(1, 0)
AssertionError: 1 != 0
======================================================================
FAIL: test_multipleasserts (__main__.TestMultipleAsserts) (<subtest>)
----------------------------------------------------------------------
Traceback (most recent call last):
File "./test.py", line 11, in test_multipleasserts
self.assertEqual(2, 0)
AssertionError: 2 != 0
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=2)
You can easily wrap subTest as follows:
class MyTestCase(unittest.TestCase):
    def expectEqual(self, first, second, msg=None):
        with self.subTest():
            self.assertEqual(first, second, msg)

class TestMA(MyTestCase):
    def test_ma(self):
        self.expectEqual(3, 0)
        self.expectEqual(4, 0)
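Note that subTest also accepts parameters that are included in the failure report, which makes it easier to tell which check failed; for example:

class TestWithParams(unittest.TestCase):
    def test_numbers(self):
        for i in range(3):
            with self.subTest(i=i):
                # a failure is reported with its parameters, e.g. (i=2)
                self.assertLess(i, 2)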
I disagree with the dominant opinion that one should write a test method for each assertion. There are situations where you want to check multiple things in one test method. Here is my answer for how to do it:
# Works with unittest in Python 2.7
import sys
import unittest

class ExpectingTestCase(unittest.TestCase):
    def run(self, result=None):
        self._result = result
        self._num_expectations = 0
        super(ExpectingTestCase, self).run(result)

    def _fail(self, failure):
        try:
            raise failure
        except failure.__class__:
            self._result.addFailure(self, sys.exc_info())

    def expect_true(self, a, msg):
        if not a:
            self._fail(self.failureException(msg))
        self._num_expectations += 1

    def expect_equal(self, a, b, msg=''):
        if a != b:
            msg = '({}) Expected {} to equal {}. '.format(self._num_expectations, a, b) + msg
            self._fail(self.failureException(msg))
        self._num_expectations += 1
And here are some situations where I think it's useful and not risky:
1) When you want to test code for different sets of data. Here we have an add() function and I want to test it with a few example inputs. Writing three test methods for the three data sets would mean repeating yourself, which is bad, especially if the call were more elaborate:
class MyTest(ExpectingTestCase):
    def test_multiple_inputs(self):
        for a, b, expect in ([1, 1, 2], [0, 0, 0], [2, 2, 4]):
            self.expect_equal(expect, add(a, b), 'inputs: {} {}'.format(a, b))
2) When you want to check multiple outputs of a function. I want to check each output, but I don't want the first failure to mask the other two.
class MyTest(ExpectingTestCase):
    def test_things_with_no_side_effects(self):
        a, b, c = myfunc()
        self.expect_equal('first value', a)
        self.expect_equal('second value', b)
        self.expect_equal('third value', c)
3) Testing things with heavy setup costs. Tests must run quickly or people stop using them. Some tests require a db or network connection that takes a second to establish, which would really slow down your tests. If you are testing the db connection itself, then you probably need to take the speed hit. But if you are testing something unrelated, you want to do the slow setup once for a whole set of checks.
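For case 3, unittest's own setUpClass hook is another way to pay the setup cost once per class rather than once per test (a sketch; make_connection stands in for whatever expensive setup you have):

import unittest

class DbTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once for all test methods in this class
        cls.conn = make_connection()  # hypothetical expensive setup

    @classmethod
    def tearDownClass(cls):
        cls.conn.close()

    def test_uses_connection(self):
        self.assertIsNotNone(self.conn)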
This feels like over-engineering to me. Either:
Use two asserts in one test case. If the first assert fails, it's true that you won't know whether the second assert passed or not. But you're going to fix the code anyway, so fix it, and then you'll find out if the second assert passed.
Write two tests, one to check each condition. If you fear duplicated code in the tests, put the bulk of the code in a helper method that you call from the tests.
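For the second option, the helper-method pattern might look like this (a sketch reusing myfunc from the earlier example):

class MyTest(unittest.TestCase):
    def _call_myfunc(self):
        # shared setup and invocation live in one place
        return myfunc()

    def test_first_value(self):
        a, _, _ = self._call_myfunc()
        self.assertEqual('first value', a)

    def test_second_value(self):
        _, b, _ = self._call_myfunc()
        self.assertEqual('second value', b)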
I have a function defined this way:
def f1(a, b, c=None, d=None):
    .....
How do I check that a and b are not equal to some value? E.g. I want to check that they are not empty strings like "" or " ".
I'm thinking about something like:
arguments = locals()
for item in arguments:
    check_attribute(item, arguments[item])
And then check that the arguments are not "" or " ". But in this case it will also try to check None values (which I don't want to do).
A typical approach would be:
import sys
...
def check_attribute(name, value):
    """Gives warnings on stderr if the value is an empty or whitespace string.
    All other values, including None, are OK and give no warning.
    """
    if isinstance(value, basestring) and (not value or value.isspace()):
        print>>sys.stderr, "Invalid value %r for argument %r" % (value, name)
or, of course, you could issue warnings, or raise exceptions if the problem is very serious according to your application's semantics.
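For instance, a sketch of the warnings variant (the same check, but using the warnings module so callers can filter or escalate it):

import warnings

def check_attribute(name, value):
    # emits a warning instead of printing to stderr directly
    if isinstance(value, basestring) and (not value or value.isspace()):
        warnings.warn("Invalid value %r for argument %r" % (value, name))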
One should probably delegate all of the checking to a single function, instead of looping in the function whose args you're checking (the latter would be sticking "checking code" smack in the middle of application logic; better to keep it out of the way...):
def check_arguments(d):
    for name, value in d.iteritems():
        check_attribute(name, value)
and the function would be just:
def f1(a, b, c=None, d=None):
    check_arguments(locals())
    ...
You could, alternatively, write a decorator in order to be able to code
@checked_arguments
def f1(a, b, c=None, d=None):
    ...
(to get checking code even more "out of the way"), but this might be considered overkill unless you really have a lot of functions requiring exactly this kind of checks!
Argument-name introspection (while feasible, thanks to module inspect) is far less simple in a decorator than within the function itself, which is why my favorite design approach would be to eschew the decorator approach in this case (simplicity is seriously good;-).
Edit -- showing how to implement a decorator, since the OP explicitly asked for one (though without clarifying why).
The main problem (in Python 2.6 and earlier) is that the wrapper must construct a mapping equivalent to the locals() Python makes for you inside the function, which has to be done explicitly in a generic wrapper.
But -- if you use the new 2.7, inspect.getcallargs does it for you! So, the problem becomes much simpler, and the decorator perhaps worth doing in many more cases (if you're in 2.6 or earlier, I still recommend eschewing the decorator approach, which would be substantially more complicated, for such specialized uses).
So, here is all you need, in Python 2.7 (reusing the check_arguments function I defined above):
import functools
import inspect

def checked_arguments(f):
    @functools.wraps(f)
    def wrapper(*a, **k):
        d = inspect.getcallargs(f, *a, **k)
        check_arguments(d)
        return f(*a, **k)
    return wrapper
The difficulty in pre-2.7 versions comes entirely from the difficulty of implementing the equivalent of inspect.getcallargs -- so, I hope that, if you really need decorators of this kind, you can simply download Python 2.7 from www.python.org and install it on your box!-)
(If you do, you'll also get many more goodies besides, as well as a longer support cycle than just about any previous Python version, since 2.7 is slated to be the last release in the Python 2.* line).
Why can't you refer to the values by their names?
def f1(a, b, c=None, d=None):
    if not a.strip():
        print('a is empty')
If you have many arguments, it is worth changing the function signature to:
def f2(*args, c=None, d=None):
    for var in args:
        if not var.strip():
            raise ValueError('all elements should be non-empty')
Alternatively, skip the None defaults explicitly when looping over locals():

for key, value in locals().items():
    if value is not None:
        check_attribute(key, value)
Though as others have said already, you can just check the arguments directly by name.