I'm trying to develop unit tests in Python using unittest from the Standard Library, and I want to try things out in the REPL. If I have the following code loaded
import unittest

class TestTrivial(unittest.TestCase):
    def test_trivial(self):
        self.assertEqual(1 + 1, 2)
        self.assertFalse(2 + 2 == 5)
Then I can evaluate the following in the REPL:
unittest.main()
That spits out:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Process Python finished
The problem is it killed the REPL. How do I run it without quitting the REPL?
Pass exit=False as an argument:
unittest.main(exit=False)
unittest.main isn't actually a function; it's just another name for unittest.TestProgram, whose __init__ method ultimately calls a runTests method, which ends with these lines:
if self.exit:
    sys.exit(not self.result.wasSuccessful())
Setting self.exit to False prevents sys.exit from being called.
This is documented (see the end of the section), but not easily found.
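Note also that unittest.main() returns the TestProgram instance, which stores the outcome in its result attribute (this is in the same section of the docs), so in the REPL you can inspect the run afterwards:

program = unittest.main(exit=False)
# program.result is the unittest.TestResult of the run
print(program.result.wasSuccessful())  # True if all tests passed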
I can't figure out why the test is not found without the line if __name__ == '__main__': before unittest.main().
I am using the latest version of PyCharm. I know that for the test to work in PyCharm you don't have to add these lines at all, but I want to understand the logic itself: why, without the line if __name__ == '__main__':, the result is the output shown below, while adding it makes everything work?
Code:
import unittest
from name_function import get_formatted_name

class NamesTestCase(unittest.TestCase):
    """Tests for 'name_function.py'."""

    def test_first_last_name(self):
        """Are names like 'Janis Joplin' working correctly?"""
        formatted_name = get_formatted_name('janis', 'joplin')
        self.assertEqual(formatted_name, 'Janis Joplin')

unittest.main()
There is only one function in the name_function module:
def get_formatted_name(first, last):
    """Builds a formatted full name."""
    full_name = f"{first} {last}"
    return full_name.title()
Result: PyCharm reports "No tests were found", with this console output:

/Users/xxx/Documents/PycharmProjects/Book/venv/bin/python "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pycharm/_jb_unittest_runner.py" --path /Users/xxx/Documents/PycharmProjects/Book/testing.py
Testing started at 00:22 ...
Launching unittests with arguments python -m unittest /Users/xxx/Documents/PycharmProjects/Book/testing.py in /Users/xxx/Documents/PycharmProjects/Book

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Process finished with exit code 0

Empty suite
I am running the testing.py module as the main program, but judging by the output, PyCharm runs the test via python -m unittest testing.NamesTestCase. I also checked the value of the global variable __name__, and it is indeed 'testing', as if testing had been imported, even though I launched it directly.

Please explain why the startup in this case differs from the standard one, and why running testing.py goes through unittest. I really want to finally understand this. I also don't understand why, if it already runs through unittest, unittest.main() doesn't run normally without the extra if __name__ == '__main__': check.
Wow. I found out tonight that Python unit tests written using the unittest module don't play well with coverage analysis under the trace module. Here's the simplest possible unit test, in foobar.py:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
If I run this with python foobar.py, I get this output:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Great. Now I want to perform coverage testing as well, so I run it again with python -m trace --count -C . foobar.py, but now I get this:
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
No, Python, it's not OK - you didn't run my test! It seems as though running in the context of trace somehow gums up unittest's test detection mechanism. Here's the (insane) solution I came up with:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

class Insane(object):
    pass

if __name__ == "__main__":
    module = Insane()
    for k, v in locals().items():
        setattr(module, k, v)
    unittest.main(module)
This is basically a workaround that reifies the abstract, unnameable name of the top-level module by faking up a copy of it. I can then pass that name to unittest.main() so as to sidestep whatever effect trace has on it. No need to show you the output; it looks just like the successful example above.
So, I have two questions:
What is going on here? Why does trace screw things up for unittest?
Is there an easier and/or less insane way to get around this problem?
A simpler workaround is to pass the name of the module explicitly to unittest.main:
import unittest

class Tester(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main(module='foobar')
trace messes up test discovery in unittest because of how trace loads the module it is running. trace reads the module source code, compiles it, and executes it in a context with a __name__ global set to '__main__'. This is enough to make most modules behave as if they were called as the main module, but doesn't actually change the module which is registered as __main__ in the Python interpreter. When unittest asks for the __main__ module to scan for test cases, it actually gets the trace module called from the command line, which of course doesn't contain the unit tests.
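You can see the mismatch directly. A minimal diagnostic you could drop into foobar.py (my illustration, not the actual trace source):

import sys

# Plain "python foobar.py": sys.modules['__main__'] is this file.
# Under "python -m trace --count -C . foobar.py": __name__ here is
# still '__main__', but sys.modules['__main__'] is the trace module.
print(__name__)
print(sys.modules['__main__'].__file__)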
coverage.py takes a different approach of actually replacing which module is called __main__ in sys.modules.
I don't know why trace doesn't work properly, but coverage.py does:
$ coverage run foobar.py
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
$ coverage report
Name     Stmts   Miss  Cover
----------------------------
foobar       6      0   100%
I like Theran's answer, but there were some catches with it, on Python 3.6 at least: running foobar.py went fine, but running foobar.py Sometestclass to execute only Sometestclass was not picked up under trace, which ran all the tests anyway.
My workaround was to specify defaultTest, when appropriate:
Remember that unittests are usually run as

python foobar.py <flags and options> <TestClass.testmethod>

so the targeted test is always the last argument, unless it is a unittest option (in which case it starts with -) or the foobar.py file itself.
import os
import sys
import unittest

lastarg = sys.argv[-1]
# Not a flag, and not foobar.py either, so treat it as the targeted test.
if not lastarg.startswith("-") and not lastarg.endswith(".py"):
    defaultTest = lastarg
else:
    defaultTest = None

unittest.main(module=os.path.splitext(os.path.basename(__file__))[0],
              defaultTest=defaultTest)
anyway, now trace only executes the desired tests, or all of them if I don't specify otherwise.
I defined a very simple configuration manager (parses config files and verifies that certain keys exist in them) and now I'm writing some tests for it.
In many cases where the config file is invalid, I want the configuration handler to call exit(). Is there some way I can write tests to ensure that exit was called, and still continue testing? Can I perhaps, for a testing environment only, "mock" the exit function?
I am using Python 2.7 and unittest.
sys.exit() raises the SystemExit exception, which should be caught by the unittest framework. So you can check that SystemExit is raised for those test cases.
Example, exit_test.py:
import sys
import unittest

def func():
    sys.exit()

class MyTest(unittest.TestCase):
    def test_func(self):
        self.assertRaises(SystemExit, func)

if __name__ == "__main__":
    unittest.main()
Run:
$ python exit_test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
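If you also want to assert which status was passed to exit(), assertRaises works as a context manager in Python 2.7, and the raised SystemExit carries the status in its code attribute. A minimal sketch:

import sys
import unittest

class ExitCodeTest(unittest.TestCase):
    def test_exit_code(self):
        with self.assertRaises(SystemExit) as cm:
            sys.exit(2)
        # SystemExit stores whatever was passed to sys.exit()
        self.assertEqual(cm.exception.code, 2)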
I want to use Python 3.3 with unit tests in a small self-contained program, i.e. I don't want to split it up into a command-line part and a "functional" part that could be tested by running it on its own from the command line.
So I have this little program:
import unittest

def stradd(a, b):
    return a + b

class test_hello(unittest.TestCase):
    def test_1(self):
        self.assertEqual(stradd("a", "b"), "ab")

unittest.main()

print(stradd("Hello, ", "world"))
Unfortunately, the print() is never reached, since unittest.main() exits the program. And even if it would not exit, it would print all kinds of output to the screen that I don't want to see in normal operation.
Is there a way to run the tests silently, as long as there is no error? Of course, they should complain loudly if something doesn't work.
I've seen Run python unit tests as an option of the program, but that doesn't answer my question either.
It is possible to achieve the effect you want with the plain unittest module. You just need to write your own simple test runner, like this:

import unittest

def stradd(a, b):
    return a + b

class test_hello(unittest.TestCase):
    def test_1(self):
        self.assertEqual(stradd("a", "b"), "ab")

def run_my_tests(test_case):
    case = unittest.TestLoader().loadTestsFromTestCase(test_case)
    result = unittest.TestResult()
    case(result)
    if result.wasSuccessful():
        return True
    else:
        print("Some tests failed!")
        for test, err in result.failures + result.errors:
            print(test)
            print(err)
        return False

if run_my_tests(test_hello):
    # All tests passed, so we can run our program.
    print(stradd("Hello, ", "world"))
The run_my_tests function returns True if all tests pass. If there is a test failure, it prints all errors/failures to stdout. For example:
$ python myscript.py
Hello, world
$ # And now the test fails...
$ python myscript.py
Some tests failed!
test_1 (__main__.test_hello)
Traceback (most recent call last):
File "myscript.py", line 8, in test_1
self.assertEqual(stradd("a", "c"), "ab")
AssertionError: 'ac' != 'ab'
Just use nosetests or py.test. Then you can write the code exactly the way you want to, with nothing except test_ functions added to the program, and run the tests via
$ nosetests filename.py
or
$ py.test filename.py
Also, no need for classes then:

def test():
    assert stradd("a", "b") == "ab"
Though it doesn't answer the "run silently" part. For me this plus command-line history works fine, at least for tiny programs, basically snippets.

Other test frameworks won't help here, because the test framework is not the issue. Having said that, py.test is the best one out there :).
The problem is that the unittest.main() function is designed specifically to run tests in a standard way and offers no way to customize the process. This leaves us with two options:

1. Use subprocess to run the tests in a separate process via unittest.main(), check the output, and continue running our program if all tests passed.
2. Leave the high-level unittest.main() alone and use the other facilities provided by the unittest module.
I'll write about both of these options as soon as I find some more free time.
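In the meantime, here is a rough sketch of the first option (untested; it assumes the file is saved as myscript.py so the child interpreter can import it by name):

import subprocess
import sys
import unittest

def stradd(a, b):
    return a + b

class test_hello(unittest.TestCase):
    def test_1(self):
        self.assertEqual(stradd("a", "b"), "ab")

if __name__ == "__main__":
    # The child imports this module by name, so __name__ there is
    # 'myscript', not '__main__', and this block does not recurse.
    proc = subprocess.Popen([sys.executable, "-m", "unittest", "myscript"],
                            stderr=subprocess.PIPE)
    _, err = proc.communicate()
    if proc.returncode == 0:
        # Tests passed silently (unittest writes to the captured stderr).
        print(stradd("Hello, ", "world"))
    else:
        # Only show unittest's output when something failed.
        sys.stderr.write(err.decode())
        sys.exit(proc.returncode)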
I am using unittest to test my Flask application, and nose to actually run the tests.
My first set of tests is to ensure the testing environment is clean and prevent running the tests on the Flask app's configured database. I'm confident that I've set up the test environment cleanly, but I'd like some assurance of that without running all the tests.
import unittest

class MyTestCase(unittest.TestCase):
    def setUp(self):
        # set some stuff up
        pass

    def tearDown(self):
        # do the teardown
        pass

class TestEnvironmentTest(MyTestCase):
    def test_environment_is_clean(self):
        # A failing test
        assert 0 == 1

class SomeOtherTest(MyTestCase):
    def test_foo(self):
        # A passing test
        assert 1 == 1
I'd like the TestEnvironmentTest to cause unittest or nose to bail if it fails, and prevent SomeOtherTest and any further tests from running. Is there some built-in way of doing this in either unittest (preferred) or nose?
In order to get one check to execute first and halt execution of the other tests when it fails, you'll need to put a call to the check in setUp() (because Python does not guarantee test order) and then fail or skip the rest on failure.
I like skipTest() because it actually doesn't run the other tests whereas raising an exception seems to still attempt to run the tests.
def setUp(self):
    # set some stuff up
    self.environment_is_clean()

def environment_is_clean(self):
    try:
        # A failing test
        assert 0 == 1
    except AssertionError:
        self.skipTest("Test environment is not clean!")
For your use case there's the setUpModule() function:

    If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run. If the exception is a SkipTest exception then the module will be reported as having been skipped instead of as an error.
Test your environment inside this function.
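A minimal sketch of that approach (the environment check itself is hypothetical):

import unittest

def setUpModule():
    # Runs once, before any test in this module.
    environment_is_clean = False  # hypothetical check
    if not environment_is_clean:
        # Raising SkipTest reports the module as skipped, not errored.
        raise unittest.SkipTest("Test environment is not clean!")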
You can skip entire test cases by calling skipTest() in setUp(). This is a new feature in Python 2.7. Instead of failing the tests, it will simply skip them all.
I'm not quite sure whether it fits your needs, but you can make the execution of a second suite of unittests conditional on the result of a first suite of unittests:
import unittest

envsuite = unittest.TestSuite()
moretests = unittest.TestSuite()

# fill suites with test cases ...

envresult = unittest.TextTestRunner().run(envsuite)
if envresult.wasSuccessful():
    unittest.TextTestRunner().run(moretests)
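For completeness, one way to fill the suites with the standard loader (using the test classes from the question):

loader = unittest.TestLoader()
envsuite.addTests(loader.loadTestsFromTestCase(TestEnvironmentTest))
moretests.addTests(loader.loadTestsFromTestCase(SomeOtherTest))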