I have a function raising a custom exception type. To my own annoyance, my pedantic nature makes me want a docstring containing a doctest snippet:
# my_package/my_module.py
class Mayday(ValueError): pass

def raisins():
    """
    >>> raisins()
    Traceback (most recent call last):
    ...
    my_module.Mayday: Yikes!
    """
    raise Mayday('Yikes!')
# EOF, i.e. nothing calling doctest.testmod() explicitly
Now, this works fine when run within my local PyCharm using an automatically created run configuration (Run 'Doctests in my_module'). It fails basically anywhere else, i.e. when invoking python -m pytest --doctest-modules (under various environments). The problem is the scope of the exception type, which then has the package name prepended:
Expected: ... my_module.Mayday: Yikes!
Got: ... my_package.my_module.Mayday: Yikes!
Interestingly, calling raisins() within the PyCharm-provided Python console (which means it should use the very same environment) yields an exception type that includes the package name and thereby, in a way, contradicts itself.
There must be a difference in invoking doctest I am not aware of. What is it?
Like every good programmer I adapted the tests so everything is green now. +cough+ Still this is not really a satisfying solution. Is there a way to succeed both ways (maybe by a clever way to import within the docstring)?
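For what it's worth, one doctest feature aimed at exactly this mismatch is the IGNORE_EXCEPTION_DETAIL directive, which makes the comparison ignore the exception's module path (and message detail). A minimal sketch, not taken from the post above:

# With the directive, both "my_module.Mayday" and "my_package.my_module.Mayday"
# (and any message) are accepted, as long as the raised exception class is Mayday.
class Mayday(ValueError): pass

def raisins():
    """
    >>> raisins()  # doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
    ...
    my_module.Mayday: Yikes!
    """
    raise Mayday('Yikes!')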
CAUTION: This situation happened because of a mistake. Check my answer.
I have a Python file (myfile.py) whose content I want to test with pytest. (The content changes dynamically.)
I've written this code:
import importlib

def test_myfile(capsys, monkeypatch):
    # Accept any prompt arguments input() might be called with.
    monkeypatch.setattr('builtins.input', lambda *args: "some_input")
    # Create a module from the custom Problem file and import it
    my_module = importlib.import_module("myfile")
    # Rest of the test script
When I run the test, I get this error:
OSError: reading from stdin while output is captured
The error is produced because there is an input() call in myfile.py, which means that my attempt to mock that function was futile.
My Question:
How can I mock out some functions inside a module I want to import?
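For context, this is roughly the monkeypatch-then-import approach the snippet above attempts, written out as a runnable sketch; the sys.modules handling and the *args signature are additions for robustness, and (as the answer below explains) the real failure in my case happened even earlier, at collection time:

import importlib
import sys


def test_myfile(monkeypatch):
    # Replace input() before importing, because myfile.py calls it at import time;
    # accept any prompt arguments.
    monkeypatch.setattr('builtins.input', lambda *args: 'some_input')
    # Drop any previously imported copy so the module body re-runs with the patched input().
    sys.modules.pop('myfile', None)
    my_module = importlib.import_module('myfile')
    assert my_module is not None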
Finally I found that I was looking in the wrong place for a solution.
Actually, I use pytest to check a learner's response and grade it. This was how I did that at the time I asked this question:
py.test test_runner.py test_case.py -v
This will test the user's response, which is saved in test_case.py (I get the second parameter inside my test method, load its content and, for example, run a desired function). Then I examined the pytest report to see whether there was an error or failure, and decided on the result (pass/fail).
Normally pytest would fail if there was an error in the user's code (e.g. a syntax error) or if the test failed (e.g. the code didn't return what it should).
This time there was an error that I didn't want to stop the test: I wanted to mock out the input() function in the user's code. When I ran the command, pytest imported both files to collect test methods before running my test method. It failed to import test_case.py because of the input() call; it never even reached the line where I asked to mock that function, and failed at the collection stage.
In the end, to fix the problem, I added a parameter to py.test, and now I run the test process like this:
py.test test_runner.py --program test_case.py -v
In this form, pytest doesn't look in test_case.py for test methods and doesn't fail.
Hope this experience helps someone else.
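For anyone wondering how such a custom --program option can be wired up: the answer above doesn't show its conftest.py, so the following is only an illustrative sketch of one way to do it:

# conftest.py (illustrative sketch, not the author's actual file)
import pytest


def pytest_addoption(parser):
    # Register the option so "py.test test_runner.py --program test_case.py" is accepted
    # and test_case.py is never collected as a test module.
    parser.addoption('--program', action='store', default=None,
                     help='path to the learner file under test')


@pytest.fixture
def program_path(request):
    # Tests in test_runner.py read the learner file path from this fixture.
    return request.config.getoption('--program')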
I'm getting a cryptic (and unhelpful, probably false) error in my nosetests script in this format (the function name has been anonymized to "some_function"; that's the one I wrote, and nose isn't invoking it correctly):
File "/Users/REDACTED/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/Users/REDACTED/lib/python2.7/site-packages/nose/util.py", line 620, in newfunc
self.test(*self.arg)
TypeError: some_function() takes exactly 1 argument (0 given)
This error isn't useful as it doesn't provide details on the source of the problem. Additionally, manually running through all the testing functions in the test script (example: nosetests tests/test_myprogram.py:test_some_function()) produces no errors.
I also manually checked the tests in order for variables that are shared across tests (to verify that leftover data changes from a previous test aren't corrupting later tests).
Due diligence: All searches on the topic turn up nothing useful:
https://www.google.com/search?q=nosetests+show+full+context+of+error+instead+of+runTest+and+newfunc&ie=utf-8&oe=utf-8&client=firefox-b-1
https://www.google.com/search?q=nosetests+show+code+location+source+of+all+errors&ie=utf-8&oe=utf-8&client=firefox-b-1
https://www.google.com/search?q=why+doesn%27t+nosetests+show+the+origin+of+an+error&ie=utf-8&oe=utf-8&client=firefox-b-1
https://www.google.com/search?q=nosetests+show+source+of+typeerror&ie=utf-8&oe=utf-8&client=firefox-b-1
https://www.google.com/search?q=nosetests+fails+to+produce+full+stack+trace%2C+returns+only+newFunc+and+runTest&ie=utf-8&oe=utf-8&client=firefox-b-1
Found the problem.
https://github.com/nose-devs/nose/issues/294
There's an insidious issue with nosetests: if you import functions with "test" in their name into a test script, they get incorrectly flagged as test functions and run as such. If they take any arguments, nosetests will call them with none and immediately produce an untraceable error.
For example:
foo.py:
def prepare_test_run_program(params):
    # some programming here, could be anything
    print("...")
    if params:  # pseudocode condition, use your imagination
        return True
    else:
        return False
And now a matching test script test_foo.py:
from foo import prepare_test_run_program
from nose.tools import assert_equal

def test_prepare_test_run_program():
    params = ...  # some parameter settings to test
    assert_equal(prepare_test_run_program(params), True)
Now run the test script from the command line:
nosetests test_foo.py
and you will get a TypeError traceable only to runTest and newfunc (as mentioned in the question):
TypeError: prepare_test_run_program() takes at least 1 argument (0 given)
The best way to fix this: set the __test__ attribute of the function mistaken for a test to False. The link I mentioned doesn't properly display the double underscores due to site-specific formatting, so here's my modified example test_foo.py, fixed for nosetests:
from foo import prepare_test_run_program
from nose.tools import assert_equal

prepare_test_run_program.__test__ = False  # tell nose that this function isn't a test

def test_prepare_test_run_program():
    params = ...  # some parameter settings to test
    assert_equal(prepare_test_run_program(params), True)
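A side note, not from the linked issue: another workaround is to alias the import so the name bound in the test module no longer matches nose's test-name pattern, and therefore never gets collected:

# test_foo.py: alias the import so the bound name no longer contains "test"
from foo import prepare_test_run_program as prepare_run_program
from nose.tools import assert_equal


def test_prepare_run_program():
    params = {'some': 'parameter'}  # illustrative parameters
    assert_equal(prepare_run_program(params), True)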
I'm looking specifically at the following build:
https://travis-ci.org/ababic/wagtailmenus/builds/267670218
All jobs seem to be reporting as successful, even though they all have a single, deliberately failing test, and this has been happening on different builds on the same project for at least the last 2 days.
The configuration in my .travis.yml hasn't changed significantly in a while, apart from switching to 'trusty' from 'precise' - and changing that back seems not to fix the issue.
My tox.ini hasn't been changed in a while either.
I tried forcing tox to an earlier version already, which didn't seem to help.
I know it's got to be something to do with tox or Travis, but that's where my knowledge ends. Any help at all would be greatly appreciated.
I had a look at the project, and this has nothing to do with either tox or Travis. The problem is that the runtests.py used by tox always exits with code 0, whatever happens. Tox (and, by extension, Travis) needs an exit code != 0 to know that something went wrong.
Relevant code in runtests.py:
[...]

def runtests():
    [...]
    try:
        execute_from_command_line(argv)
    except:
        pass

if __name__ == '__main__':
    runtests()
I did not check exactly what execute_from_command_line does, but I would reckon that it returns an error code if something went wrong (or raises an exception if something went really wrong).
Therefore I would rewrite the code above like this:
import sys

[...]

def runtests():
    [...]
    return execute_from_command_line(argv)

if __name__ == '__main__':
    sys.exit(runtests())
This way you pass through whatever the function you run reports about the outcome of your tests and exit the script with that as the exit code; if an exception is raised instead, the traceback is printed and the script also exits with a non-zero code.
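For reference, here is a minimal sketch of a runtests.py that propagates failures to tox, assuming a Django project like this one; the settings module path and app label below are illustrative, not taken from the repository:

# runtests.py (illustrative sketch)
import os
import sys

import django
from django.conf import settings
from django.test.utils import get_runner


def runtests():
    # Hypothetical settings module; use whatever the project actually defines.
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'wagtailmenus.tests.settings')
    django.setup()
    TestRunner = get_runner(settings)
    failures = TestRunner().run_tests(['wagtailmenus'])
    # Non-zero exit code when anything failed, so tox and Travis mark the build red.
    return bool(failures)


if __name__ == '__main__':
    sys.exit(runtests())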
I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest

class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some', 'processed', 'data', 'from', 'content']  # Imagine this could actually pass

class Test_test2(unittest.TestCase):
    def LoadContentFromTestFile(self):
        return None  # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(len(results), 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me i.e. make sure the test is able to run. When this fails because of the setup condition we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code shared between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed to run the tested code: any initialization of dependencies, mocks, and data needed for the test to run.
Based on the above paragraphs, you should not assert anything in your setUp method.
So, as mentioned earlier: if you can't create the test precondition, then your test is broken. To avoid situations like this, Roy Osherove wrote a great book called The Art of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine, and I worked closely with them for more than two years, so I am a little bit biased...)
Basically, there are only a few reasons to interact with external resources during the Arrange phase (or with things that may raise an exception), and most of them (if not all) are related to integration tests.
Back to your example: there is a pattern for structuring tests where you need to load an external resource (for all or most of them). Just a side note: before you decide to apply this pattern, make sure that you can't keep this content as a static resource in your UT class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource (a fuller runnable sketch follows the pros and cons below):
class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid
        # and prevent the tests from running;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
Pros:
fewer chances of failure
prevents inconsistent behavior (1. someone or something has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
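For completeness, here is a fuller, runnable sketch of the pattern above; the helper body and the deep-copy step are illustrative assumptions, not part of the original answer:

import copy
import unittest


def loadContentFromExternalResource():
    # Illustrative stand-in for an expensive call to a file, server, etc.
    return {'records': [1, 2, 3]}


class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # One call to the external resource for the whole test class.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        # Each test works on its own copy, so no test can corrupt another's data.
        self.content = copy.deepcopy(self.externalResourceContent)

    def test_hasRecords(self):
        self.assertGreater(len(self.content['records']), 0)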
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered as an error by the unittest framework. The test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check for particular conditions between tests; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will rely on having an equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It's a bit cleaner to have a test that checks these startup conditions individually and run it first; they might not be needed between each test. If you define it as test_01_check_preconditions, it will be run before any of the other test methods, even if the rest are in random order.
You can also then use unittest2.skip decorators for certain conditions.
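A small sketch of that idea: a precondition check that runs first plus a skip decorator (the file name and condition are illustrative, not from the answer above):

import os
import unittest

DATA_FILE = 'testdata.json'  # illustrative precondition


class PreconditionedTests(unittest.TestCase):
    def test_01_check_preconditions(self):
        # Runs first under unittest's default alphabetical ordering.
        self.assertTrue(os.path.exists(DATA_FILE), 'test data file is missing')

    @unittest.skipUnless(os.path.exists(DATA_FILE), 'test data file is missing')
    def test_process_data(self):
        pass  # the real test would use DATA_FILE here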
A better approach is to use addCleanup to ensure that state is reset. The advantage here is that the cleanup still runs even if the test fails, and you can also make it more aware of the specific situation, since you define it in the context of your test method.
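A minimal sketch of the addCleanup approach, using a temporary file as an illustrative resource:

import os
import tempfile
import unittest


class TempFileTests(unittest.TestCase):
    def test_writes_data(self):
        fd, path = tempfile.mkstemp()
        os.close(fd)
        # The cleanup runs even if the assertion below fails, and it is
        # declared right next to the code that created the state.
        self.addCleanup(os.remove, path)
        with open(path, 'w') as f:
            f.write('data')
        self.assertGreater(os.path.getsize(path), 0)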
There is also nothing to stop you defining methods for common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity enclosed in defined and managed areas.
Also, don't be tempted to subclass unittest2's TestCase beyond a simple test definition; I've seen people try to do that to make tests simpler and actually introduce totally unexpected behaviour.
I guess the real take-home is: if you do it, know why you want to use it and document your reasons, and it's probably OK; if you are unsure, go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
There is one reason why you want to avoid assertions in a setUp().
If setUp fails, your tearDown will not be executed.
If you setup a set of database records for instance and your teardown deletes these records, then these records will not be deleted.
With this snippet:
import unittest

class Test_test2(unittest.TestCase):
    def setUp(self):
        print('setup')
        assert False

    def test_ProcessData(self):
        print('testing')

    def tearDown(self):
        print('teardown')

if __name__ == '__main__':
    unittest.main()
When you run it, only the setUp() executes:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
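As a side note to the snippet above (and not part of the original answer): cleanup callbacks registered with addCleanup before the failing assertion are still executed even though tearDown is skipped, which is one way to avoid leaking such database records:

import unittest


class Test_test2(unittest.TestCase):
    def setUp(self):
        print('setup')
        self.records = ['a', 'b']             # imagine these are database records
        self.addCleanup(self.delete_records)  # registered before the failing assert
        assert False

    def delete_records(self):
        # Runs even though setUp failed and tearDown was skipped.
        print('deleting records')

    def test_ProcessData(self):
        print('testing')

    def tearDown(self):
        print('teardown')


if __name__ == '__main__':
    unittest.main()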