I am using Python's (3.4.1) unittest module for my unit tests.
I load all my testing module files using imports and then run unittest.main():
import unittest
import testing_module1
import testing_module2
# [...]
if __name__ == '__main__':
    unittest.main()
This works perfectly for me: it is simple and respects the command-line arguments I use to control verbosity or which test(s) to run.
I want to continue to output the same information, but I would like to generate an XML file from the results. I tried xmlrunner (https://github.com/xmlrunner/unittest-xml-reporting/) but:
it does not output as much info to stdout as the standard runner;
it uses a specific XML format that doesn't suit me.
I would like to generate the XML (I don't mind doing it manually) with the format I need but with minimal change to how the tests are run.
What are my options?
I could write my own TestRunner, but I don't want to rewrite everything; I just want to add extra output to the actual runner with minimal code change.
I could inherit from unittest.TextTestRunner, but I fear that adding XML output to it would require rewriting every method, losing the advantage of inheritance in the first place.
I could try to extract the test results after the call to unittest.main() and parse them. The problem here is that unittest.main() exits when it's done, so any code after it is not executed.
Any suggestion?
Thanks!
I ended up writing two new classes, inheriting from unittest.TextTestResult and unittest.TextTestRunner. That way, I could run main like this:
unittest.main(testRunner=xmlrunner.XMLTestRunner(...))
I overloaded unittest.TextTestRunner's __init__ and these methods of unittest.TextTestResult:
addSuccess()
addError()
addFailure()
addSubTest()
For example:
def addSuccess(self, test):
    super().addSuccess(test)
    # [... store the test into a list, dictionary, whatever ...]
Since these add*() functions are called with the actual test, I can store them in a global list and parse them at the end of my XMLTestRunner.run():
def run(self, test):
    result = super().run(test)
    self.save_xml_report(result)
    return result
Note that these functions are normally defined in /usr/lib/python3.4/unittest/runner.py.
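For reference, here is a minimal sketch of how the pieces fit together (the class names and the collected list are my own choices, and save_xml_report is left as a stub to fill in):

import unittest

class XMLTestResult(unittest.TextTestResult):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.collected = []  # (outcome, test, err) tuples

    def addSuccess(self, test):
        super().addSuccess(test)
        self.collected.append(('success', test, None))

    def addFailure(self, test, err):
        super().addFailure(test, err)
        self.collected.append(('failure', test, err))

    def addError(self, test, err):
        super().addError(test, err)
        self.collected.append(('error', test, err))

class XMLTestRunner(unittest.TextTestRunner):
    resultclass = XMLTestResult

    def run(self, test):
        result = super().run(test)
        self.save_xml_report(result)  # serialize result.collected as XML
        return result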
Warning note: by passing an actual instantiated object to unittest.main()'s testRunner argument as shown, the command-line arguments given when launching Python are ignored. For example, increasing the verbosity level with the -v argument is ignored. This is because the TestProgram class, defined in /usr/lib/python3.4/unittest/main.py, detects whether unittest.main() was run with testRunner being a class or an object (see runTests() near the end of the file). If you give just a class, like this:
unittest.main(testRunner=xmlrunner.XMLTestRunner)
then the command-line arguments are parsed. But if you pass an instantiated object (as I need), runTests() will just use it as is. I thus had to parse the arguments myself in my XMLTestRunner.__init__():
# Similar to what /usr/lib/python3.4/unittest/main.py's TestProgram._getParentArgParser() does.
import argparse

parser = argparse.ArgumentParser(add_help=False)
parser.add_argument('-v', '--verbose', dest='verbosity',
                    action='store_const', const=2, default=1,  # add default=1, not present in _getParentArgParser()
                    help='Verbose output')
parser.add_argument('-q', '--quiet', dest='verbosity',
                    action='store_const', const=0,
                    help='Quiet output')
parser.add_argument('-f', '--failfast', dest='failfast',
                    action='store_true',
                    help='Stop on first fail or error')
parser.add_argument('-c', '--catch', dest='catchbreak',
                    action='store_true',
                    help='Catch ctrl-C and display results so far')
parser.add_argument('-b', '--buffer', dest='buffer',
                    action='store_true',
                    help='Buffer stdout and stderr during tests')
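With the parser built, the options can be fed back to the base runner; a rough sketch of the rest of __init__ (the exact wiring here is my own, not taken from TestProgram):

# Still inside XMLTestRunner.__init__() (a sketch):
import sys

# Use parse_known_args so positional test names are ignored;
# TestProgram handles those itself.
options, _ = parser.parse_known_args(sys.argv[1:])
super().__init__(verbosity=options.verbosity,
                 failfast=options.failfast,
                 buffer=options.buffer)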
How does this work for you? Capture the output of unittest, which goes to sys.stderr, in a StringIO. Continue after unittest.main() by passing exit=False. Read the captured output and process it as you want. Proof of concept:
import contextlib
import io
import sys
import unittest

class Mytest(unittest.TestCase):
    def test_true(self):
        self.assertTrue(True)

@contextlib.contextmanager
def err_to(file):
    old_err = sys.stderr
    sys.stderr = file
    yield
    sys.stderr = old_err

if __name__ == '__main__':
    result = io.StringIO()
    with err_to(result):
        unittest.main(exit=False)
    result.seek(0)
    print(result.read())
This prints (to sys.stdout)
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Note: contextlib has redirect_stdout, but in Python 3.4 it does not yet have redirect_stderr (that was added in 3.5). The above is simpler than the contextlib code. It also assumes that there are no exceptions that unittest does not catch. See the contextlib.contextmanager doc for adding try/except/finally; a sketch follows.
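For completeness, an exception-safe version of err_to would use try/finally, as the contextlib docs suggest; a minimal sketch:

import contextlib
import sys

@contextlib.contextmanager
def err_to(file):
    old_err = sys.stderr
    sys.stderr = file
    try:
        yield
    finally:
        # Restore stderr even if the code under the with-block raises.
        sys.stderr = old_err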
I faced the same issue with catching FAIL events from the unittest library. Following big_gie's answer, I ended up with this code:
File testFileName_1.py
import unittest

class TestClassToTestSth(unittest.TestCase):
    def test_One(self):
        self.assertEqual(True, False, 'Hello world')
import unittest
from io import StringIO

import testFileName_1

def suites():
    return [
        # your testCase classes, for example
        testFileName_1.TestClassToTestSth,
        testFileName_445.TestClassToTestSomethingElse,
    ]

class TextTestResult(unittest.TextTestResult):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # A custom notification hook from the original code; replace with your own.
        self.slack = Slack('data-engineering-tests')

    def addFailure(self, test, err):
        super().addFailure(test, err)
        # Whatever you want here
        print(err, test)
        print(self.failures)

class TextTestRunner(unittest.TextTestRunner):
    resultclass = TextTestResult

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

loader = unittest.TestLoader()
suite = unittest.TestSuite()
stream = StringIO()
for test_case in suites():
    suite.addTests(loader.loadTestsFromTestCase(test_case))
runner = TextTestRunner(stream=stream)
result = runner.run(suite)
stream.seek(0)
print(stream.read())
Related
I'm writing TDD tests for an argparse parser. How can I test arguments that use the required option? I need to test cases like:
too many arguments,
no arguments are given,
the wrong argument is given.
I can raise SystemExit, but this is not really what I need:
def test_no_arguments(self):
    with patch.object(sys, 'exit') as mock_method:
        self.parser.parse_arguments()
        self.assertTrue(mock_method.called)
However, without raising SystemExit, I always get errors.
zbx-check-mount.py
class CommandLine:
    def __init__(self):
        self.args_parser = argparse.ArgumentParser(description="Monitoring mounted filesystems",
                                                   formatter_class=argparse.RawTextHelpFormatter)
        self.parsed_args = None
        self.add_arguments()

    def add_arguments(self):
        """
        Add arguments to parser.
        """
        try:
            self.args_parser._action_groups.pop()  # pylint: disable=protected-access
            required = self.args_parser.add_argument_group('required arguments')
            required.add_argument('--fs_name', required=True, help='Given filesystem')
        except argparse.ArgumentError as err:
            log.error('argparse.ArgumentError: %s', err)
            sys.exit(1)

    def parse_arguments(self, args=None):
        """
        Parse added arguments. Then run private method to return values
        """
        self.parsed_args = self.args_parser.parse_args()
        return self.parsed_args.fs_name,
tests
from pyfakefs.fake_filesystem_unittest import TestCase
import os
import sys

try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO

if sys.version_info[0] == 3:
    from unittest.mock import MagicMock, patch
else:
    from mock import MagicMock, patch

sys.path.extend([os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', "bin")])
module_name = __import__('zbx-check-mount')

class TestCommandLine(TestCase):
    def setUp(self):
        """
        Method called to prepare the test fixture. This is called immediately before calling the test method.
        """
        self.parser = module_name.CommandLine()

    def test_no_arguments(self):
        opts = self.parser.parse_arguments([])
        assert opts.fs_name

    def tearDown(self):
        """
        Method called immediately after the test method has been called and the result recorded.
        """
        pass
How to avoid this situation and test other options?
In def parse_arguments(self, args=None):, you should pass args on to the parser, as in:
self.args_parser.parse_args(args)
parse_args(args) parses sys.argv[1:] when args is None; otherwise it parses the provided list.
In a full distribution of python there's a unittest file for argparse (test_argparse.py). It's somewhat complex, defining a subclass of ArgumentParser that captures errors and redirects error messages.
Testing argparse is tricky because it looks at sys.argv, which the unittest scripts also use. And it usually tries to exit on errors. This has been discussed in a number of SO questions already.
If I'm interpreting your symptoms correctly, you are having problems in the test harness because your monkey patched implementation of sys.exit actually returns, which the argparse library is not expecting.
Introducing a side_effect that raises an exception, which you can then trap and verify in the unit test, may be sufficient to get around the problem.
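For instance, a minimal sketch (assuming self.parser is the CommandLine instance from the question, and sys and patch are already imported as in the question's test file):

def test_no_arguments(self):
    # Make the patched sys.exit raise, mimicking the real sys.exit,
    # so argparse does not fall through into unexpected code paths.
    with patch.object(sys, 'exit', side_effect=SystemExit) as mock_exit:
        with self.assertRaises(SystemExit):
            self.parser.parse_arguments()
        self.assertTrue(mock_exit.called)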
I am using a pytest fixture to mock up command-line arguments for testing a script. This way the arguments shared by each test function would only need to be declared in one place. I'm also trying to use pytest's capsys to capture output printed by the script. Consider the following silly example.
from __future__ import print_function
import pytest
import othermod
from sys import stdout

@pytest.fixture
def shared_args():
    args = type('', (), {})()  # an empty object to hang attributes on
    args.out = stdout
    args.prefix = 'dude:'
    return args

def otherfunction(message, prefix, stream):
    print(prefix, message, file=stream)

def test_dudesweet(shared_args, capsys):
    otherfunction('sweet', shared_args.prefix, shared_args.out)
    out, err = capsys.readouterr()
    assert out == 'dude: sweet\n'
Here, capsys does not capture sys.stdout properly. If I move from sys import stdout and args.out = stdout directly into the test function, things work as expected. But this makes things much messier, as I have to re-declare these statements for each test. Am I doing something wrong? Can I use capsys with fixtures?
The fixture is invoked before the test runs. In your example, the shared_args fixture grabs a reference to sys.stdout before capsys has swapped in its capture stream, so anything written through that saved reference bypasses the capture.
One way to fix your problem is to make your fixture return a function which can do what you want it to do. You can scope the fixture according to your use case.
from __future__ import print_function
import sys

import pytest

@pytest.fixture(scope='function')
def shared_args():
    def args_func():
        args = type('', (), {})()
        # Look up sys.stdout lazily, at call time, so the test gets the
        # stream capsys has patched in rather than the original one.
        args.out = sys.stdout
        args.prefix = 'dude:'
        return args
    return args_func

def otherfunction(message, prefix, stream):
    print(prefix, message, file=stream)

def test_dudesweet(shared_args, capsys):
    prefix, out = shared_args().prefix, shared_args().out
    otherfunction('sweet', prefix, out)
    out, err = capsys.readouterr()
    assert out == 'dude: sweet\n'
You are not using capsys.readouterr() correctly. See the correct usage of capsys here: https://stackoverflow.com/a/26618230/2312300
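For illustration, a minimal self-contained sketch of correct usage (not taken from the linked answer): call readouterr() only after the code under test has produced its output; it returns everything captured since the last call.

def test_output(capsys):
    print('dude:', 'sweet')         # code under test writes to stdout
    out, err = capsys.readouterr()  # drain what was captured so far
    assert out == 'dude: sweet\n'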
Going off of Greg Haskin's answer in this question, I tried to make a unittest to check that argparse is giving the appropriate error when I pass it some args that are not present in the choices. However, unittest generates a false positive using the try/except statement below.
In addition, when I write a test using just a with assertRaises statement, argparse forces a SystemExit and the program does not execute any further tests.
I would like to be able to have a test for this, but maybe it's redundant given that argparse exits upon error?
#!/usr/bin/env python3
import argparse
import unittest

class sweep_test_case(unittest.TestCase):
    """Tests that the merParse class works correctly"""

    def setUp(self):
        self.parser = argparse.ArgumentParser()
        self.parser.add_argument(
            "-c", "--color",
            type=str,
            choices=["yellow", "blue"],
            required=True)

    def test_required_unknown_TE(self):
        """Try to perform sweep on something that isn't an option.
        Should return an attribute error if it fails.
        This test incorrectly shows that the test passed, even though that must
        not be true."""
        args = ["--color", "NADA"]
        try:
            self.assertRaises(argparse.ArgumentError, self.parser.parse_args(args))
        except SystemExit:
            print("should give a false positive pass")

    def test_required_unknown(self):
        """Try to perform sweep on something that isn't an option.
        Should return an attribute error if it fails.
        This test incorrectly shows that the test passed, even though that must
        not be true."""
        args = ["--color", "NADA"]
        with self.assertRaises(argparse.ArgumentError):
            self.parser.parse_args(args)

if __name__ == '__main__':
    unittest.main()
Errors:
Usage: temp.py [-h] -c {yellow,blue}
temp.py: error: argument -c/--color: invalid choice: 'NADA' (choose from 'yellow', 'blue')
E
usage: temp.py [-h] -c {yellow,blue}
temp.py: error: argument -c/--color: invalid choice: 'NADA' (choose from 'yellow', 'blue')
should give a false positive pass
.
======================================================================
ERROR: test_required_unknown (__main__.sweep_test_case)
Try to perform sweep on something that isn't an option.
----------------------------------------------------------------------
Traceback (most recent call last): #(I deleted some lines)
File "/Users/darrin/anaconda/lib/python3.5/argparse.py", line 2310, in _check_value
raise ArgumentError(action, msg % args)
argparse.ArgumentError: argument -c/--color: invalid choice: 'NADA' (choose from 'yellow', 'blue')
During handling of the above exception, another exception occurred:
Traceback (most recent call last): #(I deleted some lines)
File "/anaconda/lib/python3.5/argparse.py", line 2372, in exit
_sys.exit(status)
SystemExit: 2
The trick here is to catch SystemExit instead of ArgumentError. Here's your test rewritten to catch SystemExit:
#!/usr/bin/env python3
import argparse
import unittest

class SweepTestCase(unittest.TestCase):
    """Tests that the merParse class works correctly"""

    def setUp(self):
        self.parser = argparse.ArgumentParser()
        self.parser.add_argument(
            "-c", "--color",
            type=str,
            choices=["yellow", "blue"],
            required=True)

    def test_required_unknown(self):
        """ Try to perform sweep on something that isn't an option. """
        args = ["--color", "NADA"]
        with self.assertRaises(SystemExit):
            self.parser.parse_args(args)

if __name__ == '__main__':
    unittest.main()
That now runs correctly, and the test passes:
$ python scratch.py
usage: scratch.py [-h] -c {yellow,blue}
scratch.py: error: argument -c/--color: invalid choice: 'NADA' (choose from 'yellow', 'blue')
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
However, you can see that the usage message is getting printed, so your test output is a bit messed up. It might also be nice to check that the usage message contains "invalid choice".
You can do that by patching sys.stderr:
#!/usr/bin/env python3
import argparse
import unittest
from io import StringIO
from unittest.mock import patch

class SweepTestCase(unittest.TestCase):
    """Tests that the merParse class works correctly"""

    def setUp(self):
        self.parser = argparse.ArgumentParser()
        self.parser.add_argument(
            "-c", "--color",
            type=str,
            choices=["yellow", "blue"],
            required=True)

    @patch('sys.stderr', new_callable=StringIO)
    def test_required_unknown(self, mock_stderr):
        """ Try to perform sweep on something that isn't an option. """
        args = ["--color", "NADA"]
        with self.assertRaises(SystemExit):
            self.parser.parse_args(args)
        self.assertRegex(mock_stderr.getvalue(), r"invalid choice")

if __name__ == '__main__':
    unittest.main()
Now you only see the regular test report:
$ python scratch.py
.
----------------------------------------------------------------------
Ran 1 test in 0.002s
OK
For pytest users, here's the equivalent that doesn't check the message.
import argparse
import pytest

def test_required_unknown():
    """ Try to perform sweep on something that isn't an option. """
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-c", "--color",
        type=str,
        choices=["yellow", "blue"],
        required=True)
    args = ["--color", "NADA"]
    with pytest.raises(SystemExit):
        parser.parse_args(args)
Pytest captures stdout/stderr by default, so it doesn't pollute the test report.
$ pytest scratch.py
================================== test session starts ===================================
platform linux -- Python 3.6.7, pytest-3.5.0, py-1.7.0, pluggy-0.6.0
rootdir: /home/don/.PyCharm2018.3/config/scratches, inifile:
collected 1 item
scratch.py . [100%]
================================ 1 passed in 0.01 seconds ================================
You can also check the stdout/stderr contents with pytest:
import argparse
import pytest

def test_required_unknown(capsys):
    """ Try to perform sweep on something that isn't an option. """
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-c", "--color",
        type=str,
        choices=["yellow", "blue"],
        required=True)
    args = ["--color", "NADA"]
    with pytest.raises(SystemExit):
        parser.parse_args(args)
    stderr = capsys.readouterr().err
    assert 'invalid choice' in stderr
As usual, I find pytest easier to use, but you can make it work in either one.
While the parser may raise an ArgumentError while parsing a specific argument, that error is normally trapped and passed to parser.error and parser.exit. The result is that the usage is printed, along with an error message, and then sys.exit(2) is called.
So assertRaises is not a good way of testing for this kind of error in argparse. The unittest file for the module, test/test_argparse.py, has an elaborate way of getting around this that involves subclassing ArgumentParser, redefining its error method, and redirecting output.
parser.parse_known_args (which is called by parse_args) ends with:
try:
    namespace, args = self._parse_known_args(args, namespace)
    if hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR):
        args.extend(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
        delattr(namespace, _UNRECOGNIZED_ARGS_ATTR)
    return namespace, args
except ArgumentError:
    err = _sys.exc_info()[1]
    self.error(str(err))
=================
How about this test? (I've borrowed several ideas from test_argparse.py.)
import argparse
import unittest

class ErrorRaisingArgumentParser(argparse.ArgumentParser):
    def error(self, message):
        # print(message)
        raise ValueError(message)  # reraise an error

class sweep_test_case(unittest.TestCase):
    """Tests that the Parse class works correctly"""

    def setUp(self):
        self.parser = ErrorRaisingArgumentParser()
        self.parser.add_argument(
            "-c", "--color",
            type=str,
            choices=["yellow", "blue"],
            required=True)

    def test_required_unknown(self):
        """Try to perform sweep on something that isn't an option.
        Should pass"""
        args = ["--color", "NADA"]
        with self.assertRaises(ValueError) as cm:
            self.parser.parse_args(args)
        print('msg:', cm.exception)
        self.assertIn('invalid choice', str(cm.exception))

if __name__ == '__main__':
    unittest.main()
with a run:
1931:~/mypy$ python3 stack39028204.py
msg: argument -c/--color: invalid choice: 'NADA' (choose from 'yellow', 'blue')
.
----------------------------------------------------------------------
Ran 1 test in 0.002s
OK
With many of the great answers above, I see that in the setUp method a parser instance is created inside the test and an argument is added to it, effectively making the test exercise argparse's own implementation. This can, of course, be a valid test/use case, but it doesn't necessarily test a script's or application's specific use of argparse.
I think Yauhen Yakimovich's answer gives good insight into how to make use of argparse in a pragmatic way. While I haven't embraced it fully, I thought a simplified test method is possible via a parser generator and an override.
I've opted for testing my code rather than argparse's implementation. To achieve this we'll want to utilize a factory to create the parser in our code that holds all the argument definitions. This facilitates testing our own parser in setUp.
# my_class.py
import argparse

class MyClass:
    def __init__(self):
        self.parser = self._create_args_parser()

    @staticmethod
    def _create_args_parser():
        parser = argparse.ArgumentParser()
        parser.add_argument('--kind',
                            action='store',
                            dest='kind',
                            choices=['type1', 'type2'],
                            help='kind can be any of: type1, type2')
        return parser
In our test, we can generate our parser and test against it. We will override the error method to ensure we don't get trapped in argparse's ArgumentError evaluation.
import unittest

from my_class import MyClass

class MyClassTest(unittest.TestCase):
    def _redefine_parser_error_method(self, message):
        raise ValueError(message)

    def setUp(self):
        parser = MyClass._create_args_parser()
        parser.error = self._redefine_parser_error_method
        self.parser = parser

    def test_override_certificate_kind_arguments(self):
        args = ['--kind', 'not-supported']
        expected_message = "argument --kind: invalid choice: 'not-supported'.*$"
        with self.assertRaisesRegex(ValueError, expected_message):
            self.parser.parse_args(args)
This might not be the absolute best answer but I find it nice to use our own parser's arguments and test that part by simply testing against an exception we know should only happen in the test itself.
If you look at the error log, you can see that an argparse.ArgumentError is raised, not an AttributeError. Your code should look like this:
#!/usr/bin/env python3
import argparse
import unittest
from argparse import ArgumentError

class sweep_test_case(unittest.TestCase):
    """Tests that the merParse class works correctly"""

    def setUp(self):
        self.parser = argparse.ArgumentParser()
        self.parser.add_argument(
            "-c", "--color",
            type=str,
            choices=["yellow", "blue"],
            required=True)

    def test_required_unknown_TE(self):
        """Try to perform sweep on something that isn't an option.
        Should return an attribute error if it fails.
        This test incorrectly shows that the test passed, even though that must
        not be true."""
        args = ["--color", "NADA"]
        try:
            self.assertRaises(ArgumentError, self.parser.parse_args(args))
        except SystemExit:
            print("should give a false positive pass")

    def test_required_unknown(self):
        """Try to perform sweep on something that isn't an option.
        Should return an attribute error if it fails.
        This test incorrectly shows that the test passed, even though that must
        not be true."""
        args = ["--color", "NADA"]
        with self.assertRaises(ArgumentError):
            self.parser.parse_args(args)

if __name__ == '__main__':
    unittest.main()
If you look into the source code of argparse, in argparse.py, around line 1732 (my python version is 3.5.1), there is a method of ArgumentParser called parse_known_args. The code is:
# parse the arguments and exit if there are any errors
try:
    namespace, args = self._parse_known_args(args, namespace)
    if hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR):
        args.extend(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
        delattr(namespace, _UNRECOGNIZED_ARGS_ATTR)
    return namespace, args
except ArgumentError:
    err = _sys.exc_info()[1]
    self.error(str(err))
So, the ArgumentError will be swallowed by argparse, which exits with an error code. If you want to test this anyway, the only way I can think of is mocking sys.exc_info.
I know this is an old question, but just to expand on @don-kirkby's answer of looking for SystemExit, without having to use pytest or patching: you can wrap the test code in contextlib.redirect_stderr if you want to assert something about the error message:
import contextlib
from io import StringIO
import unittest

class MyTest(unittest.TestCase):
    def test_foo(self):
        ioerr = StringIO()
        with contextlib.redirect_stderr(ioerr):
            with self.assertRaises(SystemExit) as err:
                foo('bad')
        self.assertEqual(err.exception.code, 2)
        self.assertIn("That is a 'bad' thing", ioerr.getvalue())
I had a similar problem with the same argparse error (exit 2) and corrected it by capturing the first element of the tuple that parse_known_args() returns, an argparse.Namespace object.
def test_basics_options_of_parser(self):
    parser = w2ptdd.get_parser()
    # unpacking tuple
    parser_name_space, __ = parser.parse_known_args()
    args = vars(parser_name_space)
    self.assertFalse(args['unit'])
    self.assertFalse(args['functional'])
The function foo prints to the console. I want to test that console output. How can I achieve this in Python?
I need to test this function, which has no return statement:
def foo(inStr):
    print "hi" + inStr
My test:
def test_foo():
    cmdProcess = subprocess.Popen(foo("test"), stdout=subprocess.PIPE)
    cmdOut = cmdProcess.communicate()[0]
    self.assertEquals("hitest", cmdOut)
You can easily capture standard output by just temporarily redirecting sys.stdout to a StringIO object, as follows:
import StringIO
import sys

def foo(inStr):
    print "hi" + inStr

def test_foo():
    capturedOutput = StringIO.StringIO()          # Create StringIO object
    sys.stdout = capturedOutput                   # and redirect stdout.
    foo('test')                                   # Call unchanged function.
    sys.stdout = sys.__stdout__                   # Reset redirect.
    print 'Captured', capturedOutput.getvalue()   # Now works as before.

test_foo()
The output of this program is:
Captured hitest
showing that the redirection successfully captured the output and that you were able to restore the output stream to what it was before you began the capture.
Note that the code above is for Python 2.7, as the question indicates. Python 3 is slightly different:
import io
import sys

def foo(inStr):
    print("hi" + inStr)

def test_foo():
    capturedOutput = io.StringIO()                  # Create StringIO object
    sys.stdout = capturedOutput                     # and redirect stdout.
    foo('test')                                     # Call function.
    sys.stdout = sys.__stdout__                     # Reset redirect.
    print('Captured', capturedOutput.getvalue())   # Now works as before.

test_foo()
This Python 3 answer uses unittest.mock. It also uses a reusable helper method assert_stdout, although this helper is specific to the function being tested.
import io
import unittest
import unittest.mock

from .solution import fizzbuzz

class TestFizzBuzz(unittest.TestCase):
    @unittest.mock.patch('sys.stdout', new_callable=io.StringIO)
    def assert_stdout(self, n, expected_output, mock_stdout):
        fizzbuzz(n)
        self.assertEqual(mock_stdout.getvalue(), expected_output)

    def test_only_numbers(self):
        self.assert_stdout(2, '1\n2\n')
Note that the mock_stdout arg is passed automatically by the unittest.mock.patch decorator to the assert_stdout method.
A general-purpose TestStdout class, possibly a mixin, can in principle be derived from the above.
For those using Python ≥3.4, contextlib.redirect_stdout also exists, but it seems to serve no benefit over unittest.mock.patch.
If you happen to use pytest, it has builtin output capturing. Example (pytest-style tests):
def eggs():
    print('eggs')

def test_spam(capsys):
    eggs()
    captured = capsys.readouterr()
    assert captured.out == 'eggs\n'
You can also use it with unittest test classes, although you need to pass the fixture object through into the test class, for example via an autouse fixture:
import unittest

import pytest

class TestSpam(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def _pass_fixtures(self, capsys):
        self.capsys = capsys

    def test_eggs(self):
        eggs()
        captured = self.capsys.readouterr()
        self.assertEqual('eggs\n', captured.out)
Check out Accessing captured output from a test function for more info.
You can also use the mock package as shown below, which is an example from
https://realpython.com/lessons/mocking-print-unit-tests.
from mock import patch

def greet(name):
    print('Hello ', name)

@patch('builtins.print')
def test_greet(mock_print):
    # The actual test
    greet('John')
    mock_print.assert_called_with('Hello ', 'John')
    greet('Eric')
    mock_print.assert_called_with('Hello ', 'Eric')
The answer of @Acumenus says:
It also uses a reusable helper method assert_stdout, although this helper is specific to the function being tested.
The bold part seems like a big drawback, so I would do the following instead:
import io
import sys
import unittest

# extend unittest.TestCase with new functionality
class TestCase(unittest.TestCase):
    def assertStdout(self, expected_output):
        return _AssertStdoutContext(self, expected_output)

    # as a bonus, this syntactical sugar becomes possible:
    def assertPrints(self, *expected_output):
        expected_output = "\n".join(expected_output) + "\n"
        return _AssertStdoutContext(self, expected_output)

class _AssertStdoutContext:
    def __init__(self, testcase, expected):
        self.testcase = testcase
        self.expected = expected
        self.captured = io.StringIO()

    def __enter__(self):
        sys.stdout = self.captured
        return self

    def __exit__(self, exc_type, exc_value, tb):
        sys.stdout = sys.__stdout__
        captured = self.captured.getvalue()
        self.testcase.assertEqual(captured, self.expected)
this allows for the much nicer and much more re-usable:
# in a specific test case, the new method(s) can be used
class TestPrint(TestCase):
    def test_print1(self):
        with self.assertStdout("test\n"):
            print("test")
by using a straightforward context manager. (It might also be desirable to append "\n" to expected_output, since print() adds a newline by default. See the next example...)
Furthermore, this very nice variant (for an arbitrary number of prints!)
def test_print2(self):
    with self.assertPrints("test1", "test2"):
        print("test1")
        print("test2")
is possible now.
You can also capture the standard output of a method using contextlib.redirect_stdout:
import unittest
from contextlib import redirect_stdout
from io import StringIO

class TestMyStuff(unittest.TestCase):
    # ...
    def test_stdout(self):
        with redirect_stdout(StringIO()) as sout:
            my_command_that_prints_to_stdout()
        # the stream replacing `stdout` is available outside the `with` block;
        # you may wish to strip the trailing newline
        retval = sout.getvalue().rstrip('\n')
        # test the string captured from `stdout`
        self.assertEqual(retval, "whatever_retval_should_be")
This gives you a locally scoped solution. It is also possible to capture standard error using contextlib.redirect_stderr().
Another variant is leaning on the logging module rather than print(). That module's documentation also has a suggestion of when to use print:
Display console output for ordinary usage of a command line script or program
PyTest has built-in support for testing logging messages.
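For example, with pytest's caplog fixture (a minimal sketch; do_work is a hypothetical function under test):

import logging

def do_work():
    logging.getLogger(__name__).info('work done')

def test_do_work(caplog):
    with caplog.at_level(logging.INFO):
        do_work()
    assert 'work done' in caplog.text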
I have a test module in the standard unittest format
class my_test(unittest.TestCase):
    def test_1(self):
        [tests]
    def test_2(self):
        [tests]
    # etc....
My company has a proprietary test harness that will execute my module as a command line script, and which will catch any errors raised by my module, but requires that my module be mute if successful.
So, I am trying to find a way to run my test module naked, so that if all my tests pass then nothing is printed to the screen, and if a test fails with an AssertionError, that error gets piped through the standard Python error stack (just like any other error would in a normal Python script.)
The docs advocate using the unittest.main() function to run all the tests in a given module like
if __name__ == "__main__":
unittest.main()
The problem is that this wraps the test results in unittest's harness, so that even if all tests are successful, it still prints some fluff to the screen, and if there is an error, it's not simply dumped as a usual python error, but also dressed in the harness.
I've tried redirecting the output to an alternate stream using
with open('.LOG', 'a') as logf:
    suite = unittest.TestLoader().loadTestsFromTestCase(my_test)
    unittest.TextTestRunner(stream=logf).run(suite)
The problem here is that EVERYTHING gets piped to the log file (including all notice of errors). So when my company's harness runs the module, it completes successfully because, as far as it can tell, no errors were raised (they were all piped to the log file).
Any suggestions on how I can construct a test runner that suppresses all the fluff, and pipes errors through the normal Python error stack? As always, if you think there is a better way to approach this problem, please let me know.
EDIT:
Here is what I ended up using to resolve this. First, I added a "get_test_names()" method to my test class:
import inspect

class my_test(unittest.TestCase):
    # etc....

    @staticmethod
    def get_test_names():
        """Return the names of all the test methods for this class."""
        test_names = [member[0] for member in inspect.getmembers(my_test)
                      if 'test_' in member[0]]
        return test_names
Then I replaced my call to unittest.main() with the following:
import unittest as ut

# Unittest catches all errors raised by the test cases, and returns them as
# formatted strings inside a TestResult object. In order for the test
# harness to catch these errors they need to be re-raised, and so I am defining
# this CompareError class to do that.
# For each code error, a CompareError will be raised, with the original error
# stack as the argument. For test failures (i.e. assertion errors) an
# AssertionError is raised.
class CompareError(Exception):
    def __init__(self, err):
        self.err = err
    def __str__(self):
        return repr(self.err)

# Collect all tests into a TestSuite()
all_tests = ut.TestSuite()
for test in my_test.get_test_names():
    all_tests.addTest(my_test(test))

# Define a TestResult object and run tests
results = ut.TestResult()
all_tests.run(results)

# Re-raise any script errors
for error in results.errors:
    raise CompareError(error[1])

# Re-raise any test failures
for failure in results.failures:
    raise AssertionError(failure[1])
I came up with this. If you are able to change the command line, you might remove the internal I/O redirection.
import sys, inspect, traceback

# redirect stdout,
# can be replaced by testharness.py > /dev/null at console
class devnull():
    def write(self, data):
        pass

f = devnull()
orig_stdout = sys.stdout
sys.stdout = f

class TestCase():
    def test_1(self):
        print('test_1')
    def test_2(self):
        raise AssertionError('test_2')
    def test_3(self):
        print('test_3')

if __name__ == "__main__":
    testcase = TestCase()
    testnames = [t[0] for t in inspect.getmembers(TestCase)
                 if t[0].startswith('test_')]
    for testname in testnames:
        try:
            getattr(testcase, testname)()
        except AssertionError:
            print(traceback.format_exc(), file=sys.stderr)

# restore
sys.stdout = orig_stdout