Python unittest report passed test

Hello, I have a test module like the following under "test.py":
import unittest

class TestBasic(unittest.TestCase):
    def setUp(self):
        # set up in here
        pass

class TestA(TestBasic):
    def test_one(self):
        self.assertEqual(1, 1)

    def test_two(self):
        self.assertEqual(2, 1)

if __name__ == "__main__":
    unittest.main()
And this works pretty well, but I need a way to print which tests passed. For example, I could print the output to the console:
test_one: PASSED
test_two: FAILED
Now the twist: I could add a print statement right after self.assertEqual() to mark a passed test and just print it, but I need to run the tests from a different module, say "test_reporter.py", where I have something like this:
import unittest
import test

suite = unittest.TestLoader().loadTestsFromModule(test)
results = unittest.TextTestRunner(verbosity=0).run(suite)
At this point, with results, is where I build the report.
Any suggestion is welcome.
Thanks!

Like Corey's comment mentioned, if you set verbosity=2 unittest will print the result of each test run.
results = unittest.TextTestRunner(verbosity=2).run(suite)
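If you also want the per-test status available programmatically in test_reporter.py (so you can build your report from results instead of parsing console output), one option is to plug in a custom result class. This is just a minimal sketch, not the only way to do it; the ReportResult name is made up for illustration:

import unittest
import test

class ReportResult(unittest.TextTestResult):
    """Collects (test id, status) pairs so a report can be built afterwards."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.report = []

    def addSuccess(self, test):
        super().addSuccess(test)
        self.report.append((test.id(), "PASSED"))

    def addFailure(self, test, err):
        super().addFailure(test, err)
        self.report.append((test.id(), "FAILED"))

    def addError(self, test, err):
        super().addError(test, err)
        self.report.append((test.id(), "ERROR"))

suite = unittest.TestLoader().loadTestsFromModule(test)
result = unittest.TextTestRunner(resultclass=ReportResult, verbosity=0).run(suite)
for name, status in result.report:
    print("%s: %s" % (name, status))

With the test module from the question this prints lines like test.TestA.test_one: PASSED and test.TestA.test_two: FAILED.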
If you want a little more flexibility - and you might since you are creating suites and using test runners - I recommend that you take a look at Twisted Trial. It extends Python's unittest module and provides a few more assertions and reporting features.
Writing your tests will be exactly the same (besides subclassing twisted.trial.unittest.TestCase instead of Python's unittest.TestCase), so your workflow won't change. You can still use your TestLoader, but you'll have the option of many more TestReporters: http://twistedmatrix.com/documents/11.1.0/api/twisted.trial.reporter.html.
For example, the default reporter is TreeReporter, which prints each test name followed by its result ([OK], [FAIL], and so on) in a tree-style layout.

Related

@unittest.skip(reason) can't work on main function

I'd like to prevent/skip some test cases while running others in Python. I couldn't manage to use @unittest.skip(reason) in my case. It always generates a Script Error in Python unittest.
My code:
import unittest

@unittest.skip("something")
def main():
    try:
        something = []
        for _ in range(4):
            test.log("something happened")
The result is:
Error Script Error
Detail: SkipTest: something
Do you have any idea about the issue?
unittest is a module that acts as a testing framework. You should use it according to the framework's practices (example taken from tutorialspoint.com):
import unittest

def add(x, y):
    return x + y

class SimpleTest(unittest.TestCase):
    @unittest.skip("demonstrating skipping")
    def testadd1(self):
        self.assertEqual(add(4, 5), 9)

if __name__ == '__main__':
    unittest.main()
Squish test scripts written in Python are unrelated to the unittest module. I cannot tell how the two could be used together.
Squish offers the test.skip() function for skipping test cases:
test.skip(message);
test.skip(message, detail);
This function skips further execution of the current script test case, or the current BDD scenario (or BDD scenario outline iteration), by throwing an exception and adding a SKIPPED entry to Squish's test log with the given message string, and with the given detail, if provided. If the test case logged verification failures before the skip, it is still considered failed.
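As a minimal sketch of how that looks inside a Squish script test case (only test.skip and test.log are the real Squish API here; the records list stands in for whatever data your test actually checks):

def main():
    records = []  # in a real test this would come from the application under test
    if not records:
        # skips the rest of this test case and logs a SKIPPED entry
        test.skip("no records available", "nothing to verify in this run")
    for record in records:
        test.log("verifying %s" % record)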

Unit testing __main__.py

I have a Python package (Python 3.6, if it makes a difference) that I've designed to run as 'python -m package arguments' and I'd like to write unit tests for the __main__.py module. I specifically want to verify that it sets the exit code correctly. Is it possible to use runpy.run_module to execute my __main__.py and test the exit code? If so, how do I retrieve the exit code?
To be more clear, my __main__.py module is very simple. It just calls a function that has been extensively unit tested. But when I originally wrote __main__.py, I forgot to pass the result of that function to exit(), so I would like unit tests where the main function is mocked to make sure the exit code is set correctly. My unit test would look something like:
@patch('my_module.__main__.my_main', return_value=2)
def test_rc2(self, _):
    """Test that rc 2 is the exit code."""
    sys.argv = ['arg0', 'arg1', 'arg2', …]
    runpy.run_module('my_module')
    self.assertEqual(mod_rc, 2)
My question is, how would I get what I’ve written here as ‘mod_rc’?
Thanks.
Misko Hevery has said before (I believe it was in Clean Code Talks: Don't Look for Things but I may be wrong) that he doesn't know how to effectively unit test main methods, so his solution is to make them so simple that you can prove logically that they work if you assume the correctness of the (unit-tested) code that they call.
For example, if you have a discrete, tested unit for parsing command line arguments; a library that does the actual work; and a discrete, tested unit for rendering the completed work into output, then a main method that calls all three of those in sequence is assuredly going to work.
With that architecture, you can basically get by with just one big system test that is expected to produce something other than the "default" output and it'll either crash (because you wired it up improperly) or work (because it's wired up properly and all of the individual parts work).
At this point, I'm dropping all pretense of knowing what I'm talking about. There is almost assuredly a better way to do this, but frankly you could just write a shell script:
python -m package args
test $? -eq [expected exit code]
That will exit with an error iff your program sets the wrong exit code, which TravisCI or similar will regard as a failed build.
__main__.py is still subject to normal __main__ global behavior — which is to say, you can implement your __main__.py like so
def main():
    # Your stuff
    pass

if __name__ == "__main__":
    main()
and then you can test your __main__ in whatever testing framework you like by using
from your_package.__main__ import main
As an aside, if you are using argparse, you will probably want:
def main(arg_strings=None):
    # …
    args = parser.parse_args(arg_strings)
    # …

if __name__ == "__main__":
    main()
and then you can override arg strings from a unit test simply with
from your_package.__main__ import main

def test_main():
    assert main(["x", "y", "z"]) == …
or a similar idiom in your testing framework.
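If the original concern was specifically the exit code, a hedged sketch along the same lines could look like the following. It assumes your __main__.py ends with if __name__ == "__main__": sys.exit(main()), that main() accepts an argument list as in the argparse variant above, and that it returns the result of a worker function (my_main is just the name carried over from the question):

import unittest
from unittest.mock import patch

from my_module.__main__ import main  # assumes the main()-in-__main__.py layout above

class TestExitCode(unittest.TestCase):
    @patch('my_module.__main__.my_main', return_value=2)
    def test_rc2(self, _):
        """main() should return 2, so sys.exit(main()) sets exit code 2."""
        self.assertEqual(main(['arg1', 'arg2']), 2)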
With pytest, I was able to do:
import mypkgname.__main__ as rtmain
where mypkgname is what you've named your app as a package/module. Then just running pytest as normal worked. I hope this helps some other poor soul.
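To make that concrete, a small sketch of a pytest test built on that import (main and my_main are assumptions carried over from the answers above, not part of any real package):

import mypkgname.__main__ as rtmain

def test_exit_code(monkeypatch):
    # replace the worker the entry point calls; my_main is hypothetical
    monkeypatch.setattr(rtmain, "my_main", lambda: 2)
    assert rtmain.main() == 2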

Count subtests in Python unittests separately

Since version 3.4, Python supports a simple subtest syntax when writing unittests. A simple example could look like this:
import unittest

class NumbersTest(unittest.TestCase):
    def test_successful(self):
        """A test with subtests that will all succeed."""
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i, i)

if __name__ == '__main__':
    unittest.main()
When running the tests, the output will be:
python3 test_foo.py --verbose
test_successful (__main__.NumbersTest)
A test with subtests that will all succeed. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
However, in my real-world use cases, the subtests depend on a more complex iterable and check something very different in each subtest. Consequently, I would rather have each subtest counted and listed as a separate test case in the output (Ran 6 tests in ... in this example) to get the full picture.
Is this somehow possible with the plain unittest module in Python? The nose test generator feature would output each test separately but I would like to stay compatible with the standard library if possible.
You could subclass unittest.TestResult:
class NumbersTestResult(unittest.TestResult):
    def addSubTest(self, test, subtest, outcome):
        # handle failures by calling the base class
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        # add to the total number of tests run
        self.testsRun += 1
Then in NumbersTest override the run function:
def run(self, test_result=None):
    return super(NumbersTest, self).run(NumbersTestResult())
Sorry I cannot test this in a fully working environment right now, but this should do the trick.
Using python 3.5.2, themiurge's answer didn't work out-of-the-box for me but a little tweaking got it to do what I wanted.
I had to specifically get the test runner to use this new class as follows:
if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=NumbersTestResult))
However, this didn't print the details of the test failures to the console as in the default case. To restore this behaviour I had to change the class that NumbersTestResult inherits from to unittest.TextTestResult.
class NumbersTestResult(unittest.TextTestResult):
    def addSubTest(self, test, subtest, outcome):
        # handle failures by calling the base class
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        # add to the total number of tests run
        self.testsRun += 1
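Putting those pieces together, a complete, self-contained version (the test itself is the one from the question) might look like the sketch below; note that testsRun then counts the parent test as well as each subtest, so the summary line reports one more test than the number of subtests:

import unittest

class NumbersTestResult(unittest.TextTestResult):
    """Counts every subtest towards the total number of tests run."""
    def addSubTest(self, test, subtest, outcome):
        super().addSubTest(test, subtest, outcome)
        self.testsRun += 1

class NumbersTest(unittest.TestCase):
    def test_successful(self):
        """A test with subtests that will all succeed."""
        for i in range(6):
            with self.subTest(i=i):
                self.assertEqual(i, i)

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=NumbersTestResult, verbosity=2))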

Controlling which tests run using pytest

I'm considering converting some unittest.TestCase tests into Pytest ones to take advantage of Pytest's fixtures. One feature of unittest that I wasn't able to easily find the equivalent of in Pytest, however, is the ability to create testing suites and run them. I currently often do something like this:
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    suite = unittest.TestSuite()
    # suite.addTest(TestSomething('test_1'))
    suite.addTest(TestSomething('test_2'))
    runner = unittest.TextTestRunner()
    runner.run(suite)
By commenting in and out the lines with addTest, I can easily select which tests to run. How would I do something similar with Pytest?
You can use the -k argument to run specific tests. For example
# put this in test.py
import unittest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1 + 1, 2)
Running all tests in the class TestSomething can be done like this:
py.test test.py -k TestSomething
Running only test_2:
py.test test.py -k "TestSomething and test_2"
More examples can be found in the documentation.
Another way to go is to use special test names. These can be configured in the pytest.ini file.
# content of pytest.ini
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=*_check
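With that configuration in place, pytest would collect files, classes, and functions named like the following instead of the default test_* names (check_payment.py and the names inside it are made up for illustration):

# check_payment.py          (matched by python_files=check_*.py)
class CheckPayment:         # matched by python_classes=Check
    def total_check(self):  # matched by python_functions=*_check
        assert 2 + 2 == 4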
Another way is to take action in conftest.py. In this example the collect_ignore config variable is used. It is a list of test paths to be ignored. Here test_something.py is always ignored during collection, and test_other_module_py2.py is ignored when testing with Python 3.
# content of conftest.py
import sys

collect_ignore = ["test_something/test_something.py"]
if sys.version_info[0] > 2:
    collect_ignore.append("test_other/test_other_module_py2.py")
Since pytest 2.6 it is also possible to omit classes from test registration like this:
# Will not be discovered as a test
class TestClass:
    __test__ = False
These examples were loosely taken from the pytest documentation chapter "Changing standard (Python) test discovery".
In addition to using -k filters, you can name the specific test classes or cases you want to run:
py.test test.py::TestSomething::test_2
would run just test_2.
I think the best way to do this is to use custom pytest markers.
You should mark the specific tests you want to run with
@pytest.mark.mymarkername
and run only the tests with that custom marker using the command:
py.test -v -m mymarkername
Here you can find more info regarding markers:
http://doc.pytest.org/en/latest/example/markers.html
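As a small illustration (the marker name mymarkername is kept from above; registering it under a markers entry in pytest.ini avoids warnings in recent pytest versions):

import pytest

@pytest.mark.mymarkername
def test_selected():
    assert "hello".upper() == "HELLO"

def test_not_selected():
    assert 1 + 1 == 2

Running py.test -v -m mymarkername then executes only test_selected.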
Building on mbatchkarov's answer, since the names of my tests can get quite lengthy, I would still like to be able to select tests by commenting lines in and out and hitting "Ctrl+B" in Sublime (or "Ctrl+R" using the Atom Runner). One way to do this is as follows:
import unittest
import pytest

class TestSomething(unittest.TestCase):
    def test_1(self):
        self.assertEqual("hello".upper(), "HELLO")

    def test_2(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    tests_to_run = []
    # tests_to_run.append('TestSomething and test_1')
    tests_to_run.append('TestSomething and test_2')
    tests_to_run = " or ".join(tests_to_run)
    args = [__file__, '-k', tests_to_run]
    pytest.main(args)
The idea behind this is that because Pytest accepts a string expression to match tests (rather than just a list of tests), one must generate a list of expressions matching one test only, and concatenate them using or.

how to generate unit test code for methods

I want to write unit test code to test my application code. I have different methods and now want to test these methods one by one in a Python script, but I do not know how to write them. Can anyone give me an example of a small unit test in Python?
I am thankful.
Read the unit testing framework section of the Python Library Reference.
A basic example from the documentation:
import random
import unittest

class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.seq = list(range(10))

    def testshuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, list(range(10)))

    def testchoice(self):
        element = random.choice(self.seq)
        self.assertIn(element, self.seq)

    def testsample(self):
        self.assertRaises(ValueError, random.sample, self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertIn(element, self.seq)

if __name__ == '__main__':
    unittest.main()
It's probably best to start off with the given unittest example. Some standard best practices:
put all your tests in a tests folder at the root of your project.
write one test module for each python module you're testing.
test modules should start with the word test.
test methods should start with the word test.
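For example, following those conventions, a hypothetical project could keep a tests/test_calculator.py next to a mypackage/calculator.py (all names here are made up for illustration):

# tests/test_calculator.py
import unittest

from mypackage.calculator import add  # hypothetical module under test

class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(4, 5), 9)

if __name__ == '__main__':
    unittest.main()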
When you've become comfortable with unittest (and it shouldn't take long), there are some nice extensions to it that will make life easier as your tests grow in number and scope:
nose -- easily find and run all your tests, and more.
testoob -- colorized output (and more, but that's why I use it).
pythoscope -- haven't tried it, but this will automatically generate (failing) test stubs for your application. Should save a lot of time writing boilerplate code.
Here's an example, and you might want to read a little more on Python's unit testing.
