Enable Python code only for unittests?

Let's say I have the following function:
def f():
    if TESTING:
        # Run expensive sanity check code
        ...
What is the correct way to run the TESTING code block only if we are running a unittest?
[edit: Is there some "global" variable I can access to find out if unittests are on?]

Generally, I'd suggest not doing this. Your production code really shouldn't know that the unit tests exist. One reason is that you could have code in your if TESTING block that (accidentally) makes the tests pass; since production runs of your code won't execute those bits, you could be exposed to failure in production even when your tests pass.
However, if you insist on doing this, there are two potential ways (that I can think of) this can be done.
First, you could use a module-level TESTING variable that you set to True in your test case. For example:
Production Code:
TESTING = False  # This is False until overridden in tests

def foo():
    if TESTING:
        print("expensive stuff...")
Unit-Test Code:
import production

def test_foo():
    production.TESTING = True
    production.foo()  # Prints "expensive stuff..."
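If you go this route, it's worth restoring the flag afterwards so other tests don't inherit it. A minimal sketch using unittest.mock.patch (assuming the module above is importable as production):

from unittest import mock

import production

def test_foo_with_patch():
    # patch.object sets production.TESTING to True and restores the old value when the block exits
    with mock.patch.object(production, "TESTING", True):
        production.foo()  # Prints "expensive stuff..."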
The second way is to use Python's built-in assert statement. When Python is run with -O, the interpreter will strip (or ignore) all assert statements in your code, allowing you to sprinkle these expensive gems throughout and know they will not be run when the code is executed in optimized mode. Just be sure to run your tests without the -O flag.
Example (Production Code):
def expensive_checks():
    print("expensive stuff...")
    return True

def foo():
    print("normal, speedy stuff.")
    assert expensive_checks()

foo()
Output (run with python mycode.py)
normal, speedy stuff.
expensive stuff...
Output (run with python -O mycode.py)
normal, speedy stuff.
One word of caution about assert statements: if the asserted expression does not evaluate to a true value, an AssertionError will be raised.

Related

Is there a way to automatically run a combination of Python code and pytest tests several times?

I am looking to automate the process where:
I run some python code,
then run a set of tests using pytest
then, if all tests are validated, start the process again with new data.
I am thinking of writing a script that executes the Python code, then calls pytest using pytest.main(), checks via the exit code that all tests passed, and in case of success starts again.
The issue is that it is stated in pytest docs (https://docs.pytest.org/en/stable/usage.html) that it is not recommended to make multiple calls to pytest.main():
Note from pytest docs:
"Calling pytest.main() will result in importing your tests and any modules that they import. Due to the caching mechanism of python’s import system, making subsequent calls to pytest.main() from the same process will not reflect changes to those files between the calls. For this reason, making multiple calls to pytest.main() from the same process (in order to re-run tests, for example) is not recommended."
I was wondering if it is OK to call pytest.main() the way I intend to, or if there is a better way to achieve what I am looking for.
I've made a simple example to make the problem clearer:
import pytest

A = [0]

def some_action(x):
    x[0] += 1

if __name__ == '__main__':
    print('Initial value of A: {}'.format(A))
    for i in range(10):
        if i == 5:
            # one test in test_mock2 that fails
            test_dir = "./tests/functional_tests/test_mock2.py"
        else:
            # two tests in test_mock that pass
            test_dir = "./tests/functional_tests/test_mock.py"
        some_action(A)
        check_tests = int(pytest.main(["-q", "--tb=no", test_dir]))
        if check_tests != 0:
            print('Interrupted at i={} because of test failures'.format(i))
            break
    if i > 5:
        print('All tests validated, final value of A: {}'.format(A))
    else:
        print('final value of A: {}'.format(A))
In this example some_action is executed until i reaches 5, at which point the tests fail and the execute/test cycle is interrupted. It seems to work fine; I'm only concerned because of the note in the pytest docs quoted above.
The warning applies to the following sequence of events:
Run pytest.main on some folder which imports a.py, directly or indirectly.
Modify a.py (manually or programmatically).
Attempt to rerun pytest.main on the same directory in the same Python process as in step 1.
The second run in step 3 will not see the changes you made to a.py in step 2. That is because Python does not import a file twice. Instead, it checks whether the file already has an entry in sys.modules and uses that instead. This is what lets you import large libraries multiple times without incurring a huge penalty every time.
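A minimal illustration of that caching behaviour (assuming a module a.py exists on the import path):

import importlib
import sys

import a                # first import: executes a.py and caches it in sys.modules

# ... a.py gets edited on disk here ...

import a                # no re-execution: Python returns the cached module object
assert "a" in sys.modules

importlib.reload(a)     # explicitly re-executes a.py so the changes become visible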
Modifying the values in imported modules is fine. Python binds names to references, so if you bind something (like a new integer value) to the right name, everyone will be able to see it. Your some_action function is a good example of this. Future tests will run with the modified value if they import your script as a module.
The reason that the caveat is there is that pytest is usually used to test code after it has been modified. The warning is simply telling you that if you modify your code, you need to start pytest.main in a new python process to see the changes.
Since you do not appear to be modifying the code of the files in your test and expecting the changes to show up, the caveat you cite does not apply to you. Keep doing what you are doing.

pytest - is it possible to run a script/command between all test scripts?

OK, this is definitely my fault but I need to clean it up. One of my test scripts fairly consistently (but not always) updates my database in a way that causes problems for the others (basically, it takes away access rights, for the test user, to the test database).
I could easily find out which script is causing this by running a simple query, either after each individual test, or after each test script completes.
i.e. pytest, or nose2, would do the following:
run test_aaa.py
run check_db_access.py #ideal if I could induce a crash/abort
run test_bbb.py
run check_db_access.py
...
You get the idea. Is there a built-in option or plugin that I can use? The test suite currently works on both pytest and nose2 so either is an option.
Edit: this is not a test db, or a fixture-loaded db. This is a snapshot of any of a number of extremely complex live databases and the test suite, as per its design, is supposed to introspect the database(s) and figure out how to run its tests (almost all access is read-only). This works fine and has many beneficial aspects at least in my particular context, but it also means there is no tearDown or fixture-load for me to work with.
You can use an autouse fixture (for example in conftest.py) to run code before and after every test:

import pytest

@pytest.fixture(autouse=True)
def wrapper(request):
    print('\nbefore: {}'.format(request.node.name))
    yield
    print('\nafter: {}'.format(request.node.name))

def test_a():
    assert True

def test_b():
    assert True
Example output:
$ pytest -v -s test_foo.py
test_foo.py::test_a
before: test_a
PASSED
after: test_a
test_foo.py::test_b
before: test_b
PASSED
after: test_b
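The autouse fixture above wraps every single test. If you want the check to run once per test module instead (closer to the original "after each test script" idea), one option is a module-scoped autouse fixture in conftest.py. A sketch, assuming a hypothetical check_db_access() helper that raises or asserts when the test user has lost access:

import pytest

@pytest.fixture(autouse=True, scope="module")
def db_access_guard():
    yield
    # runs after each test module finishes; check_db_access() is a hypothetical
    # helper that should raise or assert if the test user's DB access was revoked
    check_db_access()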

Unit testing __main__.py

I have a Python package (Python 3.6, if it makes a difference) that I've designed to run as 'python -m package arguments' and I'd like to write unit tests for the __main__.py module. I specifically want to verify that it sets the exit code correctly. Is it possible to use runpy.run_module to execute my __main__.py and test the exit code? If so, how do I retrieve the exit code?
To be more clear, my __main__.py module is very simple. It just calls a function that has been extensively unit tested. But when I originally wrote __main__.py, I forgot to pass the result of that function to exit(), so I would like unit tests where the main function is mocked to make sure the exit code is set correctly. My unit test would look something like:
@patch('my_module.__main__.my_main', return_value=2)
def test_rc2(self, _):
    """Test that rc 2 is the exit code."""
    sys.argv = ['arg0', 'arg1', 'arg2', …]
    runpy.run_module('my_module')
    self.assertEqual(mod_rc, 2)
My question is, how would I get what I’ve written here as ‘mod_rc’?
Thanks.
Misko Hevery has said before (I believe it was in Clean Code Talks: Don't Look for Things but I may be wrong) that he doesn't know how to effectively unit test main methods, so his solution is to make them so simple that you can prove logically that they work if you assume the correctness of the (unit-tested) code that they call.
For example, if you have a discrete, tested unit for parsing command line arguments; a library that does the actual work; and a discrete, tested unit for rendering the completed work into output, then a main method that calls all three of those in sequence is assuredly going to work.
With that architecture, you can basically get by with just one big system test that is expected to produce something other than the "default" output and it'll either crash (because you wired it up improperly) or work (because it's wired up properly and all of the individual parts work).
At this point, I'm dropping all pretense of knowing what I'm talking about. There is almost assuredly a better way to do this, but frankly you could just write a shell script:
python -m package args
test $? -eq [expected exit code]
That will exit with an error iff your program produces the wrong exit code, which TravisCI or similar will regard as a failed build.
__main__.py is still subject to the normal "__main__" name behaviour, which is to say you can implement your __main__.py like so:
def main():
    # Your stuff
    ...

if __name__ == "__main__":
    main()
and then you can test your __main__ in whatever testing framework you like by using
from your_package.__main__ import main
As an aside, if you are using argparse, you will probably want:
def main(arg_strings=None):
    # …
    args = parser.parse_args(arg_strings)
    # …

if __name__ == "__main__":
    main()
and then you can override arg strings from a unit test simply with
from your_package.__main__ import main

def test_main():
    assert main(["x", "y", "z"]) == …
or a similar idiom in your testing framework.
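If you do want to exercise __main__.py itself through runpy and check the exit status (as in the question), one option is to catch the SystemExit that sys.exit() raises. A sketch, assuming my_module/__main__.py ends with sys.exit(main()) and that main() returns 2 for these hypothetical arguments:

import runpy
import sys
import unittest

class TestExitCode(unittest.TestCase):
    def test_exit_code_is_2(self):
        sys.argv = ['my_module', 'arg1', 'arg2']
        # run_name='__main__' makes the "if __name__ == '__main__'" block execute;
        # sys.exit(n) raises SystemExit, whose .code attribute carries the exit status
        with self.assertRaises(SystemExit) as cm:
            runpy.run_module('my_module', run_name='__main__')
        self.assertEqual(cm.exception.code, 2)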
With pytest, I was able to do:
import mypkgname.__main__ as rtmain
where mypkgname is what you've named your app as a package/module. Then just running pytest as normal worked. I hope this helps some other poor soul.

What is the correct way to report an error in a Python unittest in the setUp method?

I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest

class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some', 'processed', 'data', 'from', 'content']  # Imagine this could actually pass

class Test_test2(unittest.TestCase):
    def LoadContentFromTestFile(self):
        return None  # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(results, 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me, i.e. make sure the test is able to run. When this fails because of the setUp condition we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code that is shared between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed to run the tested code. This includes any initialization of dependencies, mocks and data needed for the test to run.
Based on the above, you should not assert anything in your setUp method.
So, as mentioned elsewhere: if you can't create the test precondition then your test is broken. To avoid situations like this, Roy Osherove wrote a great book called The Art Of Unit Testing (for full disclosure, Lior Friedman (Roy's former boss) is a friend of mine and I worked closely with them for more than 2 years, so I am a little biased...).
Basically, there are only a few reasons to interact with external resources during the Arrange phase (or with things which may cause an exception), and most (if not all) of them come up in integration tests.
Back to your example: there is a pattern to structure the tests where you need to load an external resource (for all/most of them). Just a side note: before you decide to apply this pattern, make sure that you can't hold this content as a static resource in your test class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
class TestClass(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # since external resources such as other servers can provide bad content,
        # you can verify here that the content is valid
        # and prevent the tests from running;
        # however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
Pros:
fewer chances of failure
prevents inconsistent behavior (1. someone/something has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered as an error by the unittest framework. The test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check for particular conditions between tests; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will rely on having the equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It is a bit cleaner to have a test that checks these startup conditions individually and runs first; they might not be needed between each test. If you define it as test_01_check_preconditions it will run before any of the other test methods, even if the rest are in an arbitrary order.
You can also then use unittest2.skip decorators for certain conditions.
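A minimal sketch of that ordering trick (preconditions_ok() is a hypothetical check):

import unittest

class MyTests(unittest.TestCase):
    def test_01_check_preconditions(self):
        # unittest runs test methods in alphabetical order by default,
        # so this check fails first and points at the broken precondition
        self.assertTrue(preconditions_ok(), "Preconditions not met")  # hypothetical helper

    def test_other_behaviour(self):
        self.assertEqual(1 + 1, 2)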
A better approach is to use addCleanup to ensure that state is reset; the advantage here is that even if the test fails the cleanup still gets run, and you can make the cleanup more aware of the specific situation since you define it in the context of your test method.
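A minimal addCleanup sketch (acquire_resource and release_resource are hypothetical helpers):

import unittest

class ResourceTest(unittest.TestCase):
    def setUp(self):
        self.resource = acquire_resource()                 # hypothetical helper
        # registered cleanups run even if setUp or the test fails after this point
        self.addCleanup(release_resource, self.resource)   # hypothetical helper

    def test_uses_resource(self):
        self.assertIsNotNone(self.resource)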
There is also nothing to stop you defining methods to do common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity enclosed in defined and managed areas.
Also, don't be tempted to subclass unittest2 beyond a simple test definition; I've seen people try to do that to make tests simple and actually introduce totally unexpected behaviour.
I guess the real takeaway is: if you do it, know why you want to use it and document your reasons, and it's probably OK; if you are unsure, go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
There is one reason why you want to avoid assertions in a setUp().
If setUp fails, your tearDown will not be executed.
If you set up a set of database records, for instance, and your tearDown deletes these records, then those records will not be deleted.
With this snippet:
import unittest
class Test_test2(unittest.TestCase):
def setUp(self):
print 'setup'
assert False
def test_ProcessData(self):
print 'testing'
def tearDown(self):
print 'teardown'
if __name__ == '__main__':
unittest.main()
Only the setUp() is run:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)

Python unittest: cancel all tests if a specific test fails

I am using unittest to test my Flask application, and nose to actually run the tests.
My first set of tests is to ensure the testing environment is clean and prevent running the tests on the Flask app's configured database. I'm confident that I've set up the test environment cleanly, but I'd like some assurance of that without running all the tests.
import unittest

class MyTestCase(unittest.TestCase):
    def setUp(self):
        # set some stuff up
        pass

    def tearDown(self):
        # do the teardown
        pass

class TestEnvironmentTest(MyTestCase):
    def test_environment_is_clean(self):
        # A failing test
        assert 0 == 1

class SomeOtherTest(MyTestCase):
    def test_foo(self):
        # A passing test
        assert 1 == 1
I'd like the TestEnvironmentTest to cause unittest or nose to bail if it fails, and prevent SomeOtherTest and any further tests from running. Is there some built-in method in either unittest (preferred) or nose that allows for that?
In order to get one test to execute first and only halt execution of the other tests in case of an error with that test, you'll need to put a call to the test in setUp() (because python does not guarantee test order) and then fail or skip the rest on failure.
I like skipTest() because it actually doesn't run the other tests whereas raising an exception seems to still attempt to run the tests.
def setUp(self):
    # set some stuff up
    self.environment_is_clean()

def environment_is_clean(self):
    try:
        # A failing test
        assert 0 == 1
    except AssertionError:
        self.skipTest("Test environment is not clean!")
For your use case there's the setUpModule() function:
"If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run. If the exception is a SkipTest exception then the module will be reported as having been skipped instead of as an error."
Test your environment inside this function.
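A minimal sketch (environment_is_clean() is a hypothetical check):

import unittest

def setUpModule():
    # raising here prevents every test in this module from running;
    # raising unittest.SkipTest reports the module as skipped instead of errored
    if not environment_is_clean():  # hypothetical helper
        raise unittest.SkipTest("Test environment is not clean!")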
You can skip entire test cases by calling skipTest() in setUp(). This is a new feature in Python 2.7. Instead of failing the tests, it will simply skip them all.
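For example (a sketch, again assuming a hypothetical environment_is_clean() check):

import unittest

class SomeOtherTest(unittest.TestCase):
    def setUp(self):
        # skipping here skips every test in this class instead of failing them
        if not environment_is_clean():  # hypothetical helper
            self.skipTest("Test environment is not clean!")

    def test_foo(self):
        assert 1 == 1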
I'm not quite sure whether it fits your needs, but you can make the execution of a second suite of unittests conditional on the result of a first suite of unittests:
import unittest

envsuite = unittest.TestSuite()
moretests = unittest.TestSuite()
# fill suites with test cases ...

envresult = unittest.TextTestRunner().run(envsuite)
if envresult.wasSuccessful():
    unittest.TextTestRunner().run(moretests)
