unittest.py doesn't play well with trace.py - why?

Wow. I found out tonight that Python unit tests written using the unittest module don't play well with coverage analysis under the trace module. Here's the simplest possible unit test, in foobar.py:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
If I run this with python foobar.py, I get this output:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Great. Now I want to perform coverage testing as well, so I run it again with python -m trace --count -C . foobar.py, but now I get this:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
No, Python, it's not OK - you didn't run my test! It seems as though running in the context of trace somehow gums up unittest's test detection mechanism. Here's the (insane) solution I came up with:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

class Insane(object):
    pass

if __name__ == "__main__":
    module = Insane()
    for k, v in locals().items():
        setattr(module, k, v)
    unittest.main(module)
This is basically a workaround that reifies the abstract, unnameable name of the top-level module by faking up a copy of it. I can then pass that name to unittest.main() so as to sidestep whatever effect trace has on it. No need to show you the output; it looks just like the successful example above.
So, I have two questions:
What is going on here? Why does trace screw things up for unittest?
Is there an easier and/or less insane way to get around this problem?

A simpler workaround is to pass the name of the module explicitly to unittest.main:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main(module='foobar')
trace messes up test discovery in unittest because of how trace loads the module it is running. trace reads the module source code, compiles it, and executes it in a context with a __name__ global set to '__main__'. This is enough to make most modules behave as if they were called as the main module, but doesn't actually change the module which is registered as __main__ in the Python interpreter. When unittest asks for the __main__ module to scan for test cases, it actually gets the trace module called from the command line, which of course doesn't contain the unit tests.
coverage.py takes a different approach of actually replacing which module is called __main__ in sys.modules.
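You can see the mismatch with a small probe script; this is only an illustration I'm adding here (the file name probe.py is made up), and the exact repr printed will vary by Python version:
import sys

# Run as: python -m trace --count probe.py
# The __name__ global is '__main__', because trace executes the compiled
# code with that global set, but the module registered as __main__ in
# sys.modules is still the trace runner, so unittest.main() scans the
# wrong module and finds no tests.
print(__name__)
print(sys.modules['__main__'])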

I don't know why trace doesn't work properly, but coverage.py does:
$ coverage run foobar.py
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
$ coverage report
Name     Stmts   Miss  Cover
----------------------------
foobar       6      0   100%

I like Theran's answer but there were some catches with it, on Python 3.6 at least:
If I ran foobar.py, that went fine; but if I ran foobar.py Sometestclass to execute only Sometestclass, trace did not pick that up and ran all the tests anyway.
My workaround was to specify defaultTest, when appropriate:
Remember that unittest is usually run as python foobar.py <-flags and options> <TestClass.testmethod>, so the targeted test is always the last argument, unless it is a unittest option (in which case it starts with -) or the foobar.py file itself.
import os
import sys
import unittest

lastarg = sys.argv[-1]
# not a flag, not foobar.py either...
if not lastarg.startswith("-") and not lastarg.endswith(".py"):
    defaultTest = lastarg
else:
    defaultTest = None

unittest.main(module=os.path.splitext(os.path.basename(__file__))[0], defaultTest=defaultTest)
anyway, now trace only executes the desired tests, or all of them if I don't specify otherwise.
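With that in place, an invocation like this (SomeTestClass is just a placeholder name) runs only the requested test class under trace:
$ python -m trace --count -C . foobar.py SomeTestClass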

Related

Why doesn't the test work without if __name__ == '__main__'?

I can't figure out why, without the line if __name__ == '__main__': before unittest.main(), the test is not found.
I am using the latest version of PyCharm. I know that for the test to work in PyCharm you don't need to add these lines at all, but I want to understand the logic itself: why, without the line if __name__ == '__main__':, is the result as shown below, but if you add it, everything works?
Code:
import unittest
from name_function import get_formatted_name

class NamesTestCase(unittest.TestCase):
    """Tests for 'name_function.py'."""
    def test_first_last_name(self):
        """Are names like 'Janis Joplin' working correctly?"""
        formatted_name = get_formatted_name('janis', 'joplin')
        self.assertEqual(formatted_name, 'Janis Joplin')

unittest.main()
There is only one function in the name_function module:
def get_formatted_name(first, last):
    """Builds a formatted full name."""
    full_name = f"{first} {last}"
    return full_name.title()
Result:
No tests were found

/Users/xxx/Documents/PycharmProjects/Book/venv/bin/python "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pycharm/_jb_unittest_runner.py" --path /Users/xxx/Documents/PycharmProjects/Book/testing.py
Testing started at 00:22 ...
Launching unittests with arguments python -m unittest /Users/xxx/Documents/PycharmProjects/Book/testing.py in /Users/xxx/Documents/PycharmProjects/Book

Ran 0 tests in 0.000s

OK

Empty suite

Process finished with exit code 0
I am running the testing.py module as the main program, but judging by the output line, PyCharm is running the test via python -m unittest testing.NamesTestCase.
I additionally checked the value of the global variable __name__, and indeed it has the value testing, as if testing had been imported, even though I launched it directly.
Please explain why in this case the startup differs from the standard one, and why starting testing.py runs it through unittest. I really want to finally understand this issue. I also don't understand why, if it already runs through unittest, unittest.main() doesn't run normally without the additional if __name__ == '__main__': check.
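For reference, the guarded variant looks like this; the comments summarize my understanding of the behaviour rather than an answer from the thread:
if __name__ == '__main__':
    # Only true when the file is executed directly (python testing.py).
    # When PyCharm's _jb_unittest_runner.py imports it as 'testing',
    # this block is skipped and the runner's own discovery collects
    # NamesTestCase instead of a second unittest.main() firing mid-import.
    unittest.main()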

Is it possible to run Robot Framework's unittests with Pytest?

Robot Framework has a great set of unit tests which are implemented using Python's unittest module. I wonder if these tests can be run with Pytest and if somebody has already tried to do so. At least Pytest's docs say that it can deal with regular Python unittest.
EDIT: To be more precise. I would like to run Robot's own over 1000 unit tests with Pytest instead of with Python's unittest module. E.g. now you have to run python run.py inside the utest folder of RF's repo to execute all unit tests. So what I am actually asking for is how to modify run.py so that it uses the Pytest framework instead of the unittest framework?
I think the most tricky part is here:
if __name__ == '__main__':
    docs, vrbst = parse_args(sys.argv[1:])
    tests = get_tests()
    suite = unittest.TestSuite(tests)
    runner = unittest.TextTestRunner(descriptions=docs, verbosity=vrbst)
    result = runner.run(suite)
    rc = len(result.failures) + len(result.errors)
    if rc > 250:
        rc = 250
    sys.exit(rc)
especially:
suite = unittest.TestSuite(tests)
runner = unittest.TextTestRunner(descriptions=docs, verbosity=vrbst)
I can already run a single test by pointing pytest to a concrete test file, e.g. pytest utest/api/test_exposed_api.py. But if I try to run all unit tests in the utest folder with pytest utest/, I just get errors and warnings :(
Short answer: Yes it is! :)))
Long answer: I had to edit only two lines in run.py and was almost happy with the result
add import pytest below import unittest
edit the if __name__ == '__main__': section so that it looks like that:
if __name__ == '__main__':
    docs, vrbst = parse_args(sys.argv[1:])
    tests = get_tests()
    pytest.main()
Then in the utest folder just call python run.py and the tests run :)))
1532 passed, 45 warnings, 1 error in 9.70 seconds
Obviously there is more stuff which is not necessary for pytest in run.py and could be wiped out ... but me no python expert (yet)
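For what it's worth, a trimmed-down run.py along these lines should be enough; this is only a sketch of the idea, not something tested against the Robot Framework repo, and it assumes pytest should discover everything under the current (utest) directory:
import sys

import pytest

if __name__ == '__main__':
    # forward any command-line arguments to pytest; default to the current dir
    sys.exit(pytest.main(sys.argv[1:] or ['.']))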

Is it possible to run all unit test?

I have two modules with two different classes and their corresponding test classes.
foo.py
------
class foo(object):
    def fooMethod(self):
        # smthg

bar.py
------
class bar(object):
    def barMethod(self):
        # smthg

fooTest.py
------
class fooTest(unittest.TestCase):
    def fooMethodTest(self):
        # smthg

barTest.py
------
class barTest(unittest.TestCase):
    def barMethodTest(self):
        # smthg
In every file, both the test and the source modules, I removed the if __name__ == "__main__": block, to increase coherency and follow object-oriented ideology.
As with Java unit tests, I'm looking to create a module that runs all the unit tests. For example,
runAllTest.py
-------------
class runAllTest(unittest.TestCase):
    ?????

if __name__ == "__main__":
    ?????
I searched but didn't find any tutorial or example. Is it possible to do so? Why, or how?
Note: I'm using eclipse and pydev distribution on windows machine.
When running unit tests based on the built-in python unittest module, at the root level of your project run
python -m unittest discover <module_name>
For the specific example above, it suffices to run
python -m unittest discover .
https://docs.python.org/2/library/unittest.html
You could create a TestSuite and run all your tests in its if __name__ == '__main__' block:
import unittest

from fooTest import fooTest
from barTest import barTest

def create_suite():
    test_suite = unittest.TestSuite()
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(fooTest))
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(barTest))
    return test_suite

if __name__ == '__main__':
    suite = create_suite()
    runner = unittest.TextTestRunner()
    runner.run(suite)
If you do not want to add the test cases manually, look at this question/answer, which basically creates the test cases dynamically, or use some of the features of the unittest module, like test discovery and the command-line options.
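For example, a discovery-based runAllTest.py could look roughly like this; a sketch of my own, assuming the test files sit next to the script and end in Test.py, as in the question:
runAllTest.py
-------------
import unittest

if __name__ == "__main__":
    # discover every *Test.py module in this directory and run it
    suite = unittest.TestLoader().discover(".", pattern="*Test.py")
    unittest.TextTestRunner(verbosity=2).run(suite)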
I think what you are looking for is the TestLoader. With this you can load specific tests or modules or load everything under a given directory. Also, this post has some useful examples using a TestSuite instance.
EDIT: The code I usually have in my test.py:
if not popts.tests:
    suite = unittest.TestLoader().discover(os.path.dirname(__file__) + '/tests')
    #print(suite._tests)
    # Print outline
    lg.info(' * Going for Interactive net tests = ' + str(not tvars.NOINTERACTIVE))
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
else:
    lg.info(' * Running specific tests')
    suite = unittest.TestSuite()
    # Load standard tests
    for t in popts.tests:
        test = unittest.TestLoader().loadTestsFromName("tests." + t)
        suite.addTest(test)
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
This does two things:
If the -t (tests) flag is not present, find and load all tests in the directory
Else, load the requested tests one by one
I think you could just run the following command in the folder where your test files are located:
python -m unittest
as mentioned here in the docs, "when executed without arguments Test Discovery is started".
With PyDev, right-click on a folder in Eclipse and choose "Run as -> Python unit-test". This will run all tests in that folder (the names of the test files and methods have to start with "test_").
You are looking for nosetests.
You might need to rename your files; I'm not sure about the pattern nose uses to find the test files but, personally, I use *_test.py. It is possible to specify a custom pattern which your project uses for test filenames but I remember being unable to make it work so I ended up renaming my tests instead.
You also need to follow PEP 328 conventions to work with nose. I don't use IDEs with Python, but your IDE may already follow it; just read the PEP and check.
With a PEP 328 directory/package structure, you can run individual tests as
nosetests path.to.class_test
Note that instead of the usual directory separators (/ or \), I used dots.
To run all tests, simply invoke nosetests at the root of your project.

python unittest with coverage report on (sub)processes

I'm using nose to run my "unittest" tests and have nose-cov to include coverage reports. These all work fine, but part of my tests require running some code as a multiprocessing.Process. The nose-cov docs state that it can do multiprocessing, but I'm not sure how to get that to work.
I'm just running tests by running nosetests and using the following .coveragerc:
[run]
branch = True
parallel = True

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover
    # Don't complain about missing debug-only code:
    def __repr__
    #if self\.debug
    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError
    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:
    def __main__\(\):

omit =
    mainserver/tests/*
EDIT:
I fixed the parallel switch in my ".coveragerc" file. I've also tried adding a sitecustomize.py like so in my site-packages directory:
import os
import coverage
os.environ['COVERAGE_PROCESS_START']='/sites/metrics_dev/.coveragerc'
coverage.process_startup()
I'm pretty sure it's still not working properly, though, because the "missing" report still shows lines that I know are running (they output to the console). I've also tried adding the environment variable in my test case file and also in the shell before running the test cases. I also tried explicitly calling the same things in the function that's called by multiprocessing.Process to start the new process.
First, the configuration setting you need is parallel, not parallel-mode. Second, you probably need to follow the directions in the Measuring Subprocesses section of the coverage.py docs.
Another thing to consider is if you see more than one coverage file while running coverage. Maybe it's only a matter of combining them afterwards.
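If that is the case, the usual sequence is something like this (standard coverage.py commands, shown here only as a reminder):
$ coverage combine
$ coverage report -m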
tl;dr — to use coverage + nosetests + nose’s --processes option, set coverage’s --concurrency option to multiprocessing, preferably in either .coveragerc or setup.cfg rather than on the command-line (see also: command line usage and configuration files).
Long version…
I also fought this for a while, having followed the documentation on Configuring Python for sub-process coverage to the letter. Finally, upon re-examining the output of coverage run --help a bit more closely, I stumbled across the --concurrency=multiprocessing option, which I’d never used before, and which seems to be the missing link. (In hindsight, this makes sense: nose’s --processing option uses the multiprocessing library under the hood.)
Here is a minimal configuration that works as expected:
unit.py:
def is_even(x):
    if x % 2 == 0:
        return True
    else:
        return False
test.py:
import time
from unittest import TestCase

import unit

class TestIsEvenTrue(TestCase):
    def test_is_even_true(self):
        time.sleep(1)  # verify multiprocessing is being used
        self.assertTrue(unit.is_even(2))

# use a separate class to encourage nose to use a separate process for this
class TestIsEvenFalse(TestCase):
    def test_is_even_false(self):
        time.sleep(1)
        self.assertFalse(unit.is_even(1))
setup.cfg:
[nosetests]
processes = 2
verbosity = 2

[coverage:run]
branch = True
concurrency = multiprocessing
parallel = True
source = unit
sitecustomize.py (note: located in site-packages)
import os

try:
    import coverage
    os.environ['COVERAGE_PROCESS_START'] = 'setup.cfg'
    coverage.process_startup()
except ImportError:
    pass
$ coverage run $(command -v nosetests)
test_is_even_false (test.TestIsEvenFalse) ... ok
test_is_even_true (test.TestIsEvenTrue) ... ok
----------------------------------------------------------------------
Ran 2 tests in 1.085s
OK
$ coverage combine && coverage report
Name      Stmts   Miss Branch BrPart  Cover
-------------------------------------------
unit.py       4      0      2      0   100%

Passing python script arguments to test modules

I have several test modules that are all invoked together via a driver script that can take a variety of arguments. The tests themselves are written using the python unittest module.
import optparse
import unittest
import sys
import os

from tests import testvalidator
from tests import testmodifier
from tests import testimporter

# modify the path so that the test modules under /tests have access to the project root
sys.path.insert(0, os.path.dirname(__file__))

def run(verbosity):
    if verbosity == "0":
        sys.stdout = open(os.devnull, 'w')
    test_suite = unittest.TestSuite()
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(testvalidator.TestValidator))
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(testmodifier.TestModifier))
    test_suite.addTest(unittest.TestLoader().loadTestsFromTestCase(testimporter.TestDataImporter))
    unittest.TextTestRunner(verbosity=int(verbosity)).run(test_suite)

if __name__ == "__main__":
    # a simple way to control output verbosity
    parser = optparse.OptionParser()
    parser.add_option("--verbosity", "--verbosity", dest="verbosity", default="0")
    (options, args) = parser.parse_args()
    run(options.verbosity)
My issue is that, within these test modules, I have certain tests I'd like to skip based on different parameters passed to the driver. I'm aware that unittest provides a family of decorators meant to do this, but I don't know the best way to pass this information on to the individual modules. If I had a --skip-slow argument, for example, how could I then annotate tests as slow, and have them skipped?
Thank you for your time.
I had in fact been wondering this myself, and finally found the solution.
main file...
...
if __name__ == '__main__':
    args = argparser()
    from tests import *
...
And in your test modules, just do:
from __main__ import args
print args
I tested this out, and it worked rather nicely. Nice thing is how simple it is, and it's not too much of a hack at all.
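To tie it back to the original --skip-slow idea, a test module could then combine the imported args with the standard skip decorators; this is only a sketch, and it assumes the driver's argparser() returns an object with a skip_slow attribute (a hypothetical name):
# tests/testmodifier.py (sketch)
import unittest
from __main__ import args

class TestModifier(unittest.TestCase):
    # args.skip_slow is evaluated when this module is imported, which happens
    # after the driver has already parsed the command line
    @unittest.skipIf(args.skip_slow, "slow test skipped via --skip-slow")
    def test_slow_operation(self):
        pass  # placeholder for the real slow test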
You can use the nose test runner with the attrib plugin, which lets you select test cases based on attributes. In particular, the example in the plugin documentation uses @attr('slow') to mark slow test cases.
After that, from the command line:
To select all the test cases marked as slow:
$ nosetests -a slow
To select all the test cases not marked as slow:
$ nosetests -a '!slow'
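Marking a test for the plugin looks roughly like this (the decorator comes from nose.plugins.attrib; the class and method names here are just placeholders):
import unittest

from nose.plugins.attrib import attr

class TestValidator(unittest.TestCase):
    @attr('slow')
    def test_full_dataset(self):
        pass  # placeholder for the real slow test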
Here's how I solved this problem. At the bottom of my module, I put this code to set a global variable based on the presence of a --slow argument in argv:
# default to skipping slow tests; flipped below when --slow is passed
run_slow_tests = False

if __name__ == "__main__":
    try:
        i = sys.argv.index("--slow")
        run_slow_tests = True
        del sys.argv[i]
    except ValueError:
        pass
    unittest.main()
Then at the beginning of test functions which would be slow to run, I put this statement. It raises the unittest.SkipTest() exception if the flag isn't set to include slow tests.
if not run_slow_tests:
    raise unittest.SkipTest('Slow test skipped, unless --slow given in sys.argv.')
Then when I invoke the module normally, the slow tests are skipped.
% python src/my_test.py -v
test_slow (__main__.Tests) ... skipped 'Slow test skipped, unless --slow given in sys.argv.'
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
And when I add the --slow, the slow tests in that module run:
% python src/my_test.py -v --slow
test_slow (__main__.Tests) ... ok
----------------------------------------------------------------------
Ran 1 test in 10.110s
OK
Unfortunately, this doesn't work with Unittest's test discovery.
% python -m unittest discover src "*_test.py" --slow
usage: python -m unittest discover [-h] [-v] [-q] [--locals] [-f] [-c] [-b]
[-k TESTNAMEPATTERNS] [-s START]
[-p PATTERN] [-t TOP]
python -m unittest discover: error: unrecognized arguments: --slow
It also didn't work to use the @unittest.skipUnless() decorator. I suspect this is because the decorator evaluates its arguments at module definition time, but the argument isn't set to the correct value until module run time, which is later.
It isn't perfect, but it lets me work within the Python standard library. A requirement like this is a good reason to adopt a better framework, such as nose tests. For my current project, I prefer to avoid installing any outside modules.
