pytest capsys with custom exception - python

I am having trouble testing the output of a custom exception in pytest.
import pytest

class CustomException(Exception):
    def __init__(self, extra_message: str):
        self.message = extra_message
        super().__init__(self.message)
        # Should not need to print this, as Exception already does this
        # print(self.message)

def test_should_get_capsys_output(capsys):
    with pytest.raises(CustomException):
        raise CustomException("This should be here.")
    out, err = capsys.readouterr()
    # This should not be true
    assert out == ''
    assert err == ''
    assert 'This' not in out
This example should not pass, since I would expect to be able to assert that something came out in the output. If I add print(self.message), capsys does collect stdout, but then the message is printed twice when the exception is actually used.
I've also tried variations of caplog and capfd to no avail. [This SO solution] recommends capturing the exception with pytest.raises(...) as info and making assertions on info, but I would have expected capsys to work as well.
Thank you for your time.

I'm sort of confused by what you're asking.
A Python exception stops execution of the current test, even within pytest. By raising the exception inside the pytest.raises context manager (the with statement), we catch it before it gets a chance to stop the test and be printed to stderr by the interpreter. Since the caught exception is never written to stdout or stderr, capsys has nothing to capture.
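If you want to make assertions about the exception itself, the approach from the linked answer is the way to go; a minimal sketch, reusing the CustomException class from the question:

import pytest

def test_custom_exception_message():
    with pytest.raises(CustomException) as exc_info:
        raise CustomException("This should be here.")
    # Assert on the exception object instead of on captured output
    assert "This" in str(exc_info.value)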

Related

How to assert logging error which is followed by a sys.exit()

I am using Python's logging module to generate an ERROR message in specific cases, followed by a sys.exit().
if platform is None:
    logging.error(f'No platform provided!')
    sys.exit()
# Continue and do other stuff
Now I am using pytest to unit-test the specific error message. However, the sys.exit() statement causes pytest to report an error because of the SystemExit event, even when the error message passes the test.
And mocking sys.exit means the rest of the code ('Continue and do other stuff') still runs, which then causes other problems.
I have tried the following:
LOGGER = logging.getLogger(__name__)
platform = None
data.set_platform(platform, logging=LOGGER)
assert "No platform provided!" in caplog.text
This question is similar: How to assert both UserWarning and SystemExit in pytest, but it raises the error in a different way.
How do I make pytest ignore the SystemExit?
Here is one approach.
In your test module you could write the following test, where your_module is the name of the module where your actual code is defined, and function() is the function that is doing the logging and calling sys.exit().
import logging
import pytest
from your_module import function

def test_function(caplog):
    with pytest.raises(SystemExit):
        function()
    log_record = caplog.records[0]
    assert log_record.levelno == logging.ERROR
    assert log_record.message == "No platform provided!"
    assert log_record.lineno == 8  # Replace with the line no. where the log call actually appears in the main code.
(If you want to shorten this a little bit, you can use record_tuples instead of records.)
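For instance (a sketch; each entry in record_tuples is a (logger_name, level, message) tuple, and the logger name here assumes the logger was created with logging.getLogger(__name__) inside your_module):

def test_function(caplog):
    with pytest.raises(SystemExit):
        function()
    assert caplog.record_tuples == [
        ("your_module", logging.ERROR, "No platform provided!"),
    ]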
EDIT: Using caplog instead of mocking the log module.

Mock an instance method with a certain return value?

I have a method under test that looks like this:
def execute_update(self):
    """Execute an update."""
    p = subprocess.Popen(['command'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    try:
        stdout, stderr = p.communicate()
        if p.returncode == 0:
            # successful update, notify
            self.logger.info("Successfully updated.")
        else:
            # failed update, notify
            self.logger.error("Unable to update (code {returncode}):\n{output}".format(
                returncode=p.returncode, output=stdout))
    except KeyboardInterrupt as e:
        # terminate the process
        p.terminate()
        raise e
I'm attempting to test its invocation of Popen and its invocation of the logging functions in a unittest.TestCase test method with mock:
@mock.patch.object(subprocess.Popen, 'communicate', autospec=True)
@mock.patch('updateservice.subprocess.Popen', autospec=True)
def test_fetch_metadata(self, mock_popen, mock_communicate):
    """Test that we can fetch metadata correctly."""
    mock_communicate.return_value = ("OUT", "ERR")
    mock_popen.returncode = 0
    self.reference.execute_update()
    # other asserts
The last line fails with:
stdout, stderr = p.communicate()
ValueError: need more than 0 values to unpack
What am I doing wrong? I have the following requirements:
Test that the constructor to subprocess.Popen was called with the right values.
Test that the logging calls are running with the output and return code from the process.
Number two is easy enough, since I'm just injecting a MagicMock as the logger object, but I'm having trouble with number one.
I think the main problem is coming from your patch object here:
@mock.patch.object(subprocess.Popen, 'communicate', autospec=True)
Oddly enough, it seems like the type of mock being created is:
<class 'unittest.mock.NonCallableMagicMock'>
This is the first time I have come across the NonCallableMagicMock type, and there is only minimal information about it in the docs. The part that raises a flag for me is this:
with the exception of return_value and side_effect which have no
meaning on a non-callable mock.
It would require further investigation to determine exactly what that means. Taking that into consideration (and maybe you have tried this already), the following change to your unit test yields successful mocking results:
@mock.patch('server.upd.subprocess.Popen', autospec=True)
def test_fetch_metadata(self, mock_popen):
    """Test that we can fetch metadata correctly."""
    mock_popen.return_value = Mock()
    mock_popen_obj = mock_popen.return_value
    mock_popen_obj.communicate.return_value = ("OUT", "ERR")
    mock_popen_obj.returncode = 0
    self.reference.execute_update()
So, as you can see, we are pretty much creating our mock object via mock_popen.return_value. From there, everything else is aligned with what you were doing.
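To cover the first requirement, asserting the constructor arguments, you can then check the patched class directly; a short sketch using the same names as above (and assuming subprocess is imported in the test module):

mock_popen.assert_called_once_with(
    ['command'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
mock_popen_obj.communicate.assert_called_once_with()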

Why does pytest + xdist not capture output?

I'm using pytest with pytest-xdist for parallel test running. It doesn't seem to honour the -s option for passing through the standard output to the terminal as the tests are run. Is there any way to make this happen? I realise this could cause the output from the different processes to be jumbled up in the terminal but I'm ok with that.
I found a workaround, although not a full solution. By redirecting stdout to stderr, the output of print statements is displayed. This can be accomplished with a single line of Python code:
sys.stdout = sys.stderr
If placed in conftest.py, it applies to all tests.
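For example, a minimal conftest.py sketch of this workaround (the assignment runs at import time, so it takes effect before any tests start):

# conftest.py
import sys

# Send everything printed to stdout out through stderr instead,
# so pytest-xdist does not swallow it
sys.stdout = sys.stderr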
I used the following code:
# conftest.py
import _pytest.capture

def get_capman(plugin_manager):
    capman_list = [p for p in plugin_manager._plugins
                   if isinstance(p, _pytest.capture.CaptureManager)]
    return capman_list[0] if len(capman_list) == 1 else None

def get_xdist_slave(plugin_manager):
    # TODO: have no idea how to check isinstance "__channelexec__.SlaveInteractor"
    slave_list = [p for p in plugin_manager._plugins if hasattr(p, 'slaveid')]
    return slave_list[0] if len(slave_list) == 1 else None

def is_remote_xdist_session(plugin_manager):
    return get_xdist_slave(plugin_manager) is not None

def pytest_configure(config):
    capman = get_capman(config.pluginmanager)
    if is_remote_xdist_session(config.pluginmanager) and capman is not None:
        capman._method = "no"
        capman.reset_capturings()
        capman.init_capturings()
Insert it into conftest.py.
The main thing is to make sure that it is a remote session; then we reconfigure the CaptureManager instance.
One unresolved issue remains: how to check that the remote object has the "__channelexec__.SlaveInteractor" type.

Run Python unittest so that nothing is printed if successful, only AssertionError() if fails

I have a test module in the standard unittest format
class my_test(unittest.TestCase):
    def test_1(self):
        [tests]
    def test_2(self):
        [tests]
    etc....
My company has a proprietary test harness that executes my module as a command-line script and catches any errors it raises, but it requires that my module be mute if successful.
So, I am trying to find a way to run my test module naked, so that if all my tests pass then nothing is printed to the screen, and if a test fails with an AssertionError, that error gets piped through the standard Python error stack (just like any other error would in a normal Python script).
The docs advocate using the unittest.main() function to run all the tests in a given module, like:
if __name__ == "__main__":
    unittest.main()
The problem is that this wraps the test results in unittest's harness, so even if all tests are successful, it still prints some fluff to the screen, and if there is an error, it's not simply dumped as a usual Python error but is also dressed up by the harness.
I've tried redirecting the output to an alternate stream using
with open('.LOG', 'a') as logf:
    suite = unittest.TestLoader().loadTestsFromTestCase(my_test)
    unittest.TextTestRunner(stream=logf).run(suite)
The problem here is that EVERYTHING gets piped to the log file (including all notice of errors). So when my company's harness runs the module, it completes successfully because, as far as it can tell, no errors were raised (they were all piped to the log file).
Any suggestions on how I can construct a test runner that suppresses all the fluff, and pipes errors through the normal Python error stack? As always, if you think there is a better way to approach this problem, please let me know.
EDIT:
Here is what I ended up using to resolve this. First, I added a "get_test_names()" method to my test class:
class my_test(unittest.TestCase):
    etc....

    @staticmethod
    def get_test_names():
        """Return the names of all the test methods for this class."""
        test_names = [member[0] for member in inspect.getmembers(my_test)
                      if 'test_' in member[0]]
        return test_names
Then I replaced my call to unittest.main() with the following:
# Unittest catches all errors raised by the test cases and returns them as
# formatted strings inside a TestResult object. In order for the test
# harness to catch these errors they need to be re-raised, so I am defining
# this CompareError class to do that.
# For each code error, a CompareError will be raised, with the original error
# stack as the argument. For test failures (i.e. assertion errors) an
# AssertionError is raised.
class CompareError(Exception):
    def __init__(self, err):
        self.err = err
    def __str__(self):
        return repr(self.err)

# Collect all tests into a TestSuite()
all_tests = ut.TestSuite()
for test in my_test.get_test_names():
    all_tests.addTest(my_test(test))

# Define a TestResult object and run tests
results = ut.TestResult()
all_tests.run(results)

# Re-raise any script errors
for error in results.errors:
    raise CompareError(error[1])

# Re-raise any test failures
for failure in results.failures:
    raise AssertionError(failure[1])
I came up with this. If you are able to change the command line, you could remove the internal I/O redirection.
import sys, inspect, traceback

# redirect stdout,
# can be replaced by testharness.py > /dev/null at console
class devnull():
    def write(self, data):
        pass

f = devnull()
orig_stdout = sys.stdout
sys.stdout = f

class TestCase():
    def test_1(self):
        print('test_1')
    def test_2(self):
        raise AssertionError('test_2')
    def test_3(self):
        print('test_3')

if __name__ == "__main__":
    testcase = TestCase()
    testnames = [t[0] for t in inspect.getmembers(TestCase)
                 if t[0].startswith('test_')]
    for testname in testnames:
        try:
            getattr(testcase, testname)()
        except AssertionError:
            print(traceback.format_exc(), file=sys.stderr)
    # restore
    sys.stdout = orig_stdout

How can I determine if any errors were logged during a Python program's execution?

I have a Python script which calls log.error() and log.exception() in several places. These exceptions are caught so that the script can continue to run; however, I would like to be able to determine if log.error() and/or log.exception() were ever called, so I can exit the script with an error code by calling sys.exit(1). A naive implementation using an "error" variable is included below. It seems to me there must be a better way.
error = False
try:
    ...
except:
    log.exception("Something bad occurred.")
    error = True
if error:
    sys.exit(1)
I had the same issue as the original poster: I wanted to exit my Python script with an error code if any messages of error or greater severity were logged. For my application, it's desirable for execution to continue as long as no unhandled exceptions are raised. However, continuous integration builds should fail if any errors are logged.
I found the errorhandler Python package, which does just what we need. See its GitHub repo, PyPI page, and docs.
Below is the code I used:
import logging
import sys
import errorhandler
# Track if message gets logged with severity of error or greater
error_handler = errorhandler.ErrorHandler()
# Also log to stderr
stream_handler = logging.StreamHandler(stream=sys.stderr)
logger = logging.getLogger()
logger.setLevel(logging.INFO) # Set whatever logging level for stderr
logger.addHandler(stream_handler)
# Do your program here
if error_handler.fired:
    logger.critical('Failure: exiting with code 1 due to logged errors')
    raise SystemExit(1)
You can check logger._cache. It returns a dictionary with keys corresponding to the numeric values of the levels that were logged. So, to check whether an error was logged, you could do:
if 40 in logger._cache and logger._cache[40]:
    # level 40 is logging.ERROR
    sys.exit(1)
I think that your solution is not the best option. Logging is one aspect of your script, returning an error code depending on the control flow is another. Perhaps using exceptions would be a better option.
But if you want to track the calls to log, you can wrap it in a thin wrapper class. A simple example (without inheritance or dynamic attribute delegation) follows:
class LogWrapper:
    def __init__(self, log):
        self.log = log
        self.error = False

    def exception(self, message):
        self.error = True
        self.log.exception(message)
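Usage would then look something like this (a sketch; only exception() is forwarded here, so other logging methods would need similar wrappers or __getattr__ delegation, and risky_operation is a hypothetical function that may raise):

import logging
import sys

log = LogWrapper(logging.getLogger(__name__))
try:
    risky_operation()  # hypothetical function that may raise
except Exception:
    log.exception("Something bad occurred.")
if log.error:
    sys.exit(1)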
Whenever logger._cache is not a solution (e.g., when other packages/modules log on their own, which won't end up in logger._cache), you can build a logging Filter that records the worst log level seen:
class ContextFilterWorstLevel(logging.Filter):
    def __init__(self):
        self.worst_level = logging.INFO

    def filter(self, record):
        if record.levelno > self.worst_level:
            self.worst_level = record.levelno
        return True

# Create a logger object and add the filter
logger = logging.getLogger()
logger.addFilter(ContextFilterWorstLevel())

# Check the worst log level called later
for log_filter in logger.filters:
    if isinstance(log_filter, ContextFilterWorstLevel):
        print(log_filter.worst_level)
You can employ a counter. If you want to track individual exceptions, create a dictionary with the exception as the key and the integer counter as the value.
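One way to realize the counter idea is a custom logging.Handler attached to the root logger (a sketch; this counts records per level name rather than per exception type):

import logging
from collections import Counter

class ErrorCounterHandler(logging.Handler):
    """Count how many records were logged at each level."""
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def emit(self, record):
        self.counts[record.levelname] += 1

counter = ErrorCounterHandler()
logging.getLogger().addHandler(counter)

# ... run the program ...

if counter.counts["ERROR"] + counter.counts["CRITICAL"] > 0:
    raise SystemExit(1)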
