Python 2.6 unittest framework, asserts: help required

I am writing a test suite in python 2.6 using the unittest framework, and I want to use asserts in my code. I know that asserts got a complete overhaul and are much nicer in 2.7+ but I am confined to using 2.6 for now.
I am having problems using asserts. I want to be able to use the assertIn(a, b) method, but alas, that is only in 2.7+. So I turned to assertTrue(x), which does exist in 2.6, but that didn't work either. Then I looked at this document, which says that in previous versions assertTrue(x) used to be failUnless(x), so I used that in my code, and still no results.
I get the message:
NameError: global name 'failUnless' is not defined
which is the same thing I got for assertIn(a,b) and for assertTrue(x).
So I am totally at a loss for what I should do.
shorter version of my problem:
I want to be able to implement assertIn(a,b) in python 2.6.
Anyone have any solutions to this?
my code:
import unittest

class test_base(unittest.TestCase):
    # some functions that are used by many tests

class test_01(test_base):
    def setUp(self):
        # set up code

    def tearDown(self):
        # tear down code

    def test_01001_something(self):
        # gets the return value of a function
        ret = do_something()
        # here I want to check if foo is in ret
        failUnless("foo" in ret)
Edit: Seems I am an idiot. All I needed to do was add self.assert.... and it worked.

import unittest

class MyTest(unittest.TestCase):
    def test_example(self):
        self.assertTrue(x)
This should work, based on the docs for unittest from Python 2.6. Be sure to use it as TestCase.assertTrue().
Edit: In your example, write it as self.failUnless("foo" in ret) and it should work.

assertTrue should work just fine for an in test:
self.assertTrue('a' in somesequence)
All assertIn does is run the same test as above and set a helpful message if the test fails.
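For illustration, here is roughly how the two failures compare; the messages in the comments are approximate and vary slightly by Python version:

    self.assertTrue('a' in ['b', 'c'])   # fails with an uninformative AssertionError
    self.assertIn('a', ['b', 'c'])       # fails with: AssertionError: 'a' not found in ['b', 'c']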

Your test case code really helped.
Your problem is that you're trying to use assert[Something] as plain functions, while they are methods of the TestCase class.
So you can solve your problem with, e.g., assertTrue:
self.assertTrue(element in list_object)

Actually implementing assertIn is pretty trivial. This is what I've used in my unit tests:
class MyTestCase(unittest.TestCase):
    def assertIn(self, item, iterable):
        self.assertTrue(item in iterable,
                        msg="{item} not found in {iterable}"
                            .format(item=item, iterable=iterable))
You can then base all your test cases on this class instead of unittest.TestCase and safely use assertIn even on Python 2.6, and the error message will be much better than with a plain assertTrue. For comparison, the actual implementation of assertIn from Python 2.7:
def assertIn(self, member, container, msg=None):
    """Just like self.assertTrue(a in b), but with a nicer default message."""
    if member not in container:
        standardMsg = '%s not found in %s' % (safe_repr(member),
                                              safe_repr(container))
        self.fail(self._formatMessage(msg, standardMsg))
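A minimal sketch of how a test might build on the backported helper above; the test class and the do_something function are hypothetical:

def do_something():
    # hypothetical function under test
    return ["foo", "bar"]

class Test01(MyTestCase):
    def test_something(self):
        ret = do_something()
        self.assertIn("foo", ret)  # works on 2.6 via MyTestCase.assertIn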

Related

Mocking a decorator that uses a hardcoded global variable

When trying to unit test the code snippet below, I am blocked by the rate limit that the decorator wrapping the calc_something function imposes on me. It seems that I can't override RAND_RATE in my unit tests, since by the time I import the module containing my implementation, the decorator has already wrapped my function. How can I solve this issue?
RAND_RATE = 20
RAND_PERIOD = 10

@limits(calls=RAND_RATE, period=RAND_PERIOD)
def calc_something():
    ...
Without knowing exactly what limits does, we don't know what (if anything) can be patched. Instead, leave the base implementation undecorated for use by the unit tests; calc_something is then defined separately, as the result of applying limits manually.
RAND_RATE = 20
RAND_PERIOD = 10

def _do_calc():
    ...

calc_something = limits(calls=RAND_RATE, period=RAND_PERIOD)(_do_calc)
Now in your tests, you can define any decorated version you like:
test_me = limits(10, 5)(my_module._do_calc)
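As a rough sketch of how a test might use this, assuming my_module and _do_calc from above; the assertion is a placeholder:

def test_calc_something_logic():
    # exercise the undecorated implementation directly,
    # so no rate limit interferes with the test
    result = my_module._do_calc()
    assert result is not None  # placeholder assertion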

What is the best way to mock os.system for unit test (PyTest)

I have a Python script that does multiple os.system calls. Asserting against the series of them as a list of strings will be easy (and relatively elegant).
What isn't so easy is intercepting (and blocking) the actual calls. In the script in question, I could abstract os.system in the SUT (*) like so:
os_system = None

def main():
    return do_the_thing(os.system)

def do_the_thing(os_sys):
    global os_system
    os_system = os_sys
    # all other functions should use os_system instead of os.system
My test invokes my_script.do_the_thing() instead of my_script.main() of course (leaving a tiny amount of untested code).
Alternate option: I could leave the SUT untouched and replace os.system globally in the test method before invoking main() in the SUT.
That leaves me with a new problem, in that it's a global and lasting change. Fine, so I'd use a try/finally in the same test method and restore the original before leaving it. That would work whether the test method passes or fails.
Is there a safe and elegant setup/teardown centric way of doing this for PyTest, though?
Additional complications: I want to do the same for stdout and stderr. Yes, it really is a main() script that I am testing.
(*) SUT == System Under Test
The Python 3 (>= 3.3) standard library has a great tutorial about Mock in the official documentation. For Python 2, you can use the backported library: mock on PyPI.
Here is a sample usage. Say you want to mock the call to os.system in this function:
import os

def my_function(src_dir):
    os.system('ls ' + src_dir)
To do that, you can use the unittest.mock.patch decorator, like this:
import unittest.mock

@unittest.mock.patch('os.system')
def test_my_function(os_system):
    # type: (unittest.mock.Mock) -> None
    my_function("/path/to/dir")
    os_system.assert_called_once_with('ls /path/to/dir')
This test function patches the os.system call during its execution; os.system is restored at the end.
There are then several "assert" methods available to check the calls, the parameters, and the results. You can also check that an exception is raised in certain circumstances.
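For instance, a mock can be told to raise an exception via its side_effect attribute; a minimal sketch, with the OSError chosen purely for illustration:

import unittest.mock
import pytest

@unittest.mock.patch('os.system')
def test_my_function_failure(os_system):
    os_system.side_effect = OSError("boom")  # make the mocked call raise
    with pytest.raises(OSError):
        my_function("/path/to/dir")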
Just want to add an important detail.
If your code imports the module itself, i.e. import os, for example like this:
myls.py:
import os

def do_ls():
    os.system('ls')
Then the patch in your test should look like this:
test_myls.py:
from unittest.mock import patch
from myls import do_ls

@patch('os.system')
def test_do_ls(mock_system):
    do_ls()
    mock_system.assert_called()
However, if the code uses from os import system, e.g. like this:
myls.py:
from os import system

def do_ls():
    system('ls')
Then the patch in your test should look like this:
test_myls.py:
from unittest.mock import patch
from myls import do_ls

@patch('myls.system')
def test_do_ls(mock_system):
    do_ls()
    mock_system.assert_called()
This eluded me for a while because I had skipped the "where to patch" section of the documentation. If patching does not seem to work, this is one of the first things to check.
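As a side note on the original question's wish for a setup/teardown-centric approach: pytest's built-in monkeypatch fixture undoes its changes automatically at teardown, whether the test passes or fails. A minimal sketch, assuming the import os variant of myls above; the recording list is illustrative:

from myls import do_ls

def test_do_ls_records_calls(monkeypatch):
    calls = []
    # replace os.system for the duration of this test only;
    # monkeypatch restores the original at teardown
    monkeypatch.setattr("os.system", lambda cmd: calls.append(cmd) or 0)
    do_ls()
    assert calls == ['ls']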

Is there a way to override default assert in pytest (python)?

I'd like to log some information to a file/database every time assert is invoked. Is there a way to override assert, or to register some sort of callback function that runs every time assert is invoked?
Regards
Sharad
Try overloading AssertionError instead of assert. The original assertion error is available in the exceptions module in Python 2 and the builtins module in Python 3.
import exceptions

class AssertionError:
    def __init__(self, *args, **kwargs):
        print("Log me!")
        raise exceptions.AssertionError
I don't think that would be possible. assert is a statement (not a function) in Python and has predefined behavior; it's a language element and cannot simply be modified. Changing the language cannot be the solution to a problem: the problem has to be solved using what the language provides.
There is one thing you can do, though. assert raises an AssertionError exception on failure, and this can be exploited to get the job done: place the assert statement in a try/except block and do your callbacks inside it. It isn't as good a solution as the one you are looking for, since you have to do this with every assert, and modifying a statement's behavior is something one shouldn't do anyway.
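A minimal sketch of that try/except pattern; the log_failure helper and the compute function are hypothetical:

def log_failure(message):
    # hypothetical callback: write to a file, database, etc.
    print("assertion failed:", message)

try:
    assert compute() == 42, "compute() returned the wrong value"
except AssertionError as exc:
    log_failure(str(exc))
    raise  # re-raise so the test still fails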
It is possible, because pytest is actually re-writing assert expressions in some cases. I do not know how to do it or how easy it is, but here is the documentation explaining when assert re-writing occurs in pytest:
https://docs.pytest.org/en/latest/assert.html
By default, if the Python version is greater than or equal to 2.6, py.test rewrites assert statements in test modules.
...
py.test rewrites test modules on import. It does this by using an import hook to write new pyc files.
Theoretically, you could look at the pytest code to see how they do it, and perhaps do something similar.
For further information, Benjamin Peterson wrote up Behind the scenes of py.test's new assertion rewriting: http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html
I suggest using pyhamcrest. It has very beautiful matchers which can easily be reimplemented; you can also write your own.

How to properly extend other classes in Python? (python v3.3)

I've started to learn python in the past few days, and while exploring object-oriented programming I'm running into problems.
I'm using Eclipse with the PyDev plugin, running the Python 3.3 beta on a 64-bit Windows system.
I can initialize a class fine and use any methods within it, as long as I'm not trying to extend the superclass. (I've coded each class in a different source file.)
For example, the following code compiles and runs fine.
class pythonSuper:
    string1 = "hello"

    def printS():
        print pythonSuper.string1
and the code to access and run it...
from stackoverflow.questions import pythonSuper

class pythonSub:
    pysuper = pythonSuper.pythonSuper()
    pysuper.printS()
Like I said, that works. The following code doesn't
class pythonSuper: """Same superclass as above. unmodified, except for the spacing"""
string1 = "hello"
def printS(self):
print(pythonSuper.string1)
Well, that's not quite true. The superclass is absolutely fine, at least to my knowledge. It's the subclass that weirds out
from stackoverflow.questions import pythonSuper

class pythonSub(pythonSuper):
    pass

pythonObject = pythonSub()
pythonSub.pythonSuper.printS()
When the subclass is run, Eclipse prints out this error:
Traceback (most recent call last):
  File "C:\Users\Anish\workspace\Python 3.3\stackoverflow\questions\pythonSub.py", line 7, in <module>
    class pythonSub(pythonSuper):
TypeError: module.__init__() takes at most 2 arguments (3 given)
I have no idea what's going on. I've been learning Python from thenewboston's tutorials, but those are outdated (I think his tutorial code uses Python 2.7). He also codes in IDLE, which means that his classes are all contained in one file; mine, however, are each coded in files of their own. That means I have no idea whether the errors I'm getting are the result of outdated syntax or my lack of knowledge of this language. But I digress. If anyone could post back with a solution and/or explanation of why the code is going wrong and what I could do to fix it, an explanation would be preferred: I'd rather know what I'm doing wrong, so I can avoid and fix the problem in similar situations, than just copy and paste some code and see that it works.
Thanks, and I look forward to your answers
I ran your code, albeit with a few modifications, and it runs perfectly. Here is my code:
pythonSuper:
class pythonSuper:
    string1 = 'hello'

    def printS(self):
        print(self.string1)
main:
from pythonSuper import pythonSuper as pySuper

class pythonSub(pySuper):
    pass

pythonObject = pythonSub()
pythonObject.printS()
NOTE: The change I have made to your code is the following: you had written pythonSub.pythonSuper.printS(), which is not correct, because pythonSub already provides a printS() method, directly inherited from the superclass, so there is no need to refer to the superclass explicitly in that statement. The statement I substituted for it, pythonObject.printS(), addresses this issue.
pythonSuper refers to the module, not the class.
class pythonSub(pythonSuper.pythonSuper):
    pass

Giving parameters into TestCase from Suite in python

From the Python documentation (http://docs.python.org/library/unittest.html):
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
        self.widget = None

    def test_default_size(self):
        self.assertEqual(self.widget.size(), (50, 50),
                         'incorrect default size')

    def test_resize(self):
        self.widget.resize(100, 150)
        self.assertEqual(self.widget.size(), (100, 150),
                         'wrong size after resize')
Here is how those test cases are invoked:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_size'))
    suite.addTest(WidgetTestCase('test_resize'))
    return suite
Is it possible to insert a parameter custom_parameter into WidgetTestCase, like:
class WidgetTestCase(unittest.TestCase):
    def setUp(self, custom_parameter):
        self.widget = Widget('The widget')
        self.custom_parameter = custom_parameter
?
What I've done is simply add, in the test_suite module:

WidgetTestCase.CustomParameter = "some_address"
The simplest solutions are the best :)
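A minimal sketch of how that class-attribute approach might fit together; the attribute name comes from the answer, while the setUp usage is an assumption:

class WidgetTestCase(unittest.TestCase):
    CustomParameter = None  # set by the suite before the tests run

    def setUp(self):
        self.widget = Widget('The widget')
        self.custom_parameter = self.CustomParameter

# in the test_suite module:
WidgetTestCase.CustomParameter = "some_address"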
I've found a way to do this, but it's a bit of a kludge.
Basically, what I do is add to the TestCase an __init__ method which defines a 'default' parameter, and a __str__ so that we can distinguish cases:
class WidgetTestCase(unittest.TestCase):
    def __init__(self, methodName='runTest'):
        self.parameter = default_parameter
        unittest.TestCase.__init__(self, methodName)

    def __str__(self):
        ''' Override this so that we know which instance it is '''
        return "%s(%s) (%s)" % (self._testMethodName, self.parameter,
                                unittest._strclass(self.__class__))
Then in suite(), I iterate over my test parameters, replacing the default parameter with one specific to each test:
def suite():
    suite = unittest.TestSuite()
    for test_parameter in test_parameters:
        loadedtests = unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase)
        for t in loadedtests:
            t.parameter = test_parameter
        suite.addTests(loadedtests)
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OtherWidgetTestCases))
    return suite
where OtherWidgetTestCases are tests which don't need to be parameterised.
For instance, I have a bunch of tests on real data, to each of which a suite of tests needs to be applied. But I also have some synthetic data sets, designed to test certain edge cases not normally present in the data, and I only need to apply certain tests to those, so they get their own tests in OtherWidgetTestCases.
This is something that has been on my mind recently. Yes, it is very possible to do. I called it scenario testing, but I think parameterized may be more accurate. I put a proof of concept up as a gist here. In short, it is a metaclass that allows you to define a scenario and run the tests against it a bunch of times. With it, your example can look something like this:
class WidgetTestCase(unittest.TestCase):
    __metaclass__ = ScenarioMeta

    class widget_width(ScenerioTest):
        scenarios = [
            dict(widget_in=Widget("One Way"), expected_tuple=(50, 50)),
            dict(widget_in=Widget("Another Way"), expected_tuple=(100, 150))
        ]

        def __test__(self, widget_in, expected_tuple):
            self.assertEqual(widget_in.size, expected_tuple)
When run, the metaclass writes out 2 separate tests, so the output would be something like:
$ python myscerariotest.py -v
test_widget_width_0 (__main__.widget_width) ... ok
test_widget_width_1 (__main__.widget_width) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
As you can see, the scenarios are converted to tests at runtime.
Now, I am not yet sure if this is even a good idea. I use it in tests where I have a lot of text-centric cases that repeat the same assertions on slightly different data, and it helps me catch the little edge cases. But the classes in that gist do work, and I believe they accomplish what you are after.
Note that with some trickery the test cases can be given names and even pulled from an external source like a text file or database. It's not documented yet, but some digging around in the metaclass should get you started. There is also some more info and examples in my post here.
Edit
This is an ugly hack that I do not support anymore. The implementation should have been done as a subclass of TestCase, not as a hacked metaclass. Live and learn. An even better solution would be to use nose generators.
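For reference, a minimal sketch of the nose-generator style that edit alludes to: nose treats a test function that yields (callable, args...) tuples as a generator of test cases. The Widget data below is illustrative:

def check_size(widget_name, expected):
    assert Widget(widget_name).size == expected

def test_widget_sizes():
    for widget_name, expected in [("One Way", (50, 50)),
                                  ("Another Way", (100, 150))]:
        yield check_size, widget_name, expected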
I don't believe so; the signature of setUp needs to be what unittest expects. As far as I know, setUp is automagically called within the test case's run method as setUp(), so you're not going to be able to pass in the variable you want unless you override run. But I think what you want defeats the purpose of unit testing. Don't try to apply a DRY philosophy here: each unit you're testing should be a part of a class, or even part of a function/method.
I don't think this is a good idea. Unit tests should be thorough enough that you test all functionality in your cases, so passing in different parameters shouldn't be required.
You mention you're passing in a www address; this is almost certainly not a good idea. What happens if you try to run the tests on a machine where the net connection is down? Your tests should be:
Automatic - they will run on all machines and platforms where your app is supported, without user intervention. They shouldn't rely on the external environment to pass. This means (amongst other things) that relying on a properly set up connection to the Internet is a bad idea. You can get around this by providing dummy data: instead of passing in a URL to a resource, abstract away the data source and pass in a data stream or whatever. This is especially easy in Python, since you can make use of duck typing to present a stream-like object (Python frequently uses a "file-like" object for this very reason!).
Thorough - your unit tests should have 100% code coverage and cover all possible situations. You want to test your code with multiple sites? Instead, test your code with all the possible features that a site may include. Without knowing more about what your application does, I can't offer much advice on this point.
Now, it looks like your tests are going to be heavily data-driven. There are many tools that allow you to define data sets for unit tests and load them in the tests; check out Python test fixtures, for example.
I realise that this isn't the answer you're looking for, but I think you'll have more joy in the long-run if you follow these principles.
