How to disable pytest plugins for single tests - python

I've installed a new pytest plugin (pytest-catchlog==1.2.2) and, as much as I like it, it breaks my unit tests for the logging module (e.g. ValueError: I/O operation on closed file).
I would like to disable that plugin for the test_logging.py file (or even for a single class or method), but I can't find any information on how to do that.
The only option I've found so far is to execute pytest twice: first for test_logging.py only, with catchlog disabled (py.test -p no:catchlog test_logging.py), and second for all other test files.
Please let me know if I missed a pytest decorator, or any other way of disabling a plugin at runtime.

You cannot selectively disable arbitrary plugins for selected tests. Plugins are loaded at a much earlier stage, when pytest starts, and they define what pytest does and how (e.g. command line options, test collection, filtering, etc.).
In other words, by the time test execution begins it is too late to redefine pytest's internal structure.
Your best option is, indeed, to mark the affected tests with @pytest.mark.nocatchlog and execute them separately:
pytest -m 'nocatchlog' -p no:catchlog   # problematic tests, plugin disabled
pytest -m 'not nocatchlog'              # all other tests
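For completeness, a sketch of what the marking side could look like in a test file (the marker name nocatchlog is arbitrary; in recent pytest versions you would also register it under markers in pytest.ini to avoid warnings):

# test_logging.py (sketch)
import pytest


@pytest.mark.nocatchlog
def test_logging_shutdown():
    ...  # a test that conflicts with pytest-catchlog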
If those tests are not under your control, i.e. you cannot add marks, then you can only filter by expressions such as -k test_logging or -k 'not test_logging' (i.e. by part of their node id).
Specifically for the pytest-catchlog plugin, you can implement the same hooks it does and remove its log handler from the root logger (assuming that no other loggers are used explicitly):
conftest.py:
import logging

import pytest


def _disable_catchlog(item):
    logger = logging.getLogger()
    if item.catch_log_handler in logger.handlers:
        logger.handlers.remove(item.catch_log_handler)


@pytest.hookimpl(hookwrapper=True, trylast=True)
def pytest_runtest_setup(item):
    _disable_catchlog(item)
    yield


@pytest.hookimpl(hookwrapper=True, trylast=True)
def pytest_runtest_call(item):
    _disable_catchlog(item)
    yield


@pytest.hookimpl(hookwrapper=True, trylast=True)
def pytest_runtest_teardown(item):
    _disable_catchlog(item)
    yield
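Note that a conftest.py only affects tests collected from its own directory and subdirectories, so placing this one next to test_logging.py leaves the plugin fully active for the rest of the suite.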


pytest - is it possible to run a script/command between all test scripts?

OK, this is definitely my fault, but I need to clean it up. One of my test scripts fairly consistently (but not always) updates my database in a way that causes problems for the others (basically, it takes away the test user's access rights to the test database).
I could easily find out which script is causing this by running a simple query, either after each individual test, or after each test script completes.
i.e. pytest, or nose2, would do the following:
run test_aaa.py
run check_db_access.py #ideal if I could induce a crash/abort
run test_bbb.py
run check_db_access.py
...
You get the idea. Is there a built-in option or plugin that I can use? The test suite currently works on both pytest and nose2 so either is an option.
Edit: this is not a test db, or a fixture-loaded db. This is a snapshot of any of a number of extremely complex live databases and the test suite, as per its design, is supposed to introspect the database(s) and figure out how to run its tests (almost all access is read-only). This works fine and has many beneficial aspects at least in my particular context, but it also means there is no tearDown or fixture-load for me to work with.
import pytest


@pytest.fixture(autouse=True)
def wrapper(request):
    print('\nbefore: {}'.format(request.node.name))
    yield
    print('\nafter: {}'.format(request.node.name))


def test_a():
    assert True


def test_b():
    assert True
Example output:
$ pytest -v -s test_foo.py
test_foo.py::test_a
before: test_a
PASSED
after: test_a
test_foo.py::test_b
before: test_b
PASSED
after: test_b
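Adapting that to the database question is mostly a matter of scoping the fixture to the module level and aborting when the check fails. A sketch, assuming a hypothetical check_db_access() helper that runs your simple query (stubbed out here):

# conftest.py (sketch)
import pytest


def check_db_access():
    # hypothetical placeholder: run the real access-check query here and
    # return False when the test user has lost access to the test database
    return True


@pytest.fixture(scope="module", autouse=True)
def db_access_guard(request):
    yield  # run all tests in the module first
    if not check_db_access():
        # abort the whole run as soon as the offending module is found
        pytest.exit("DB access broken after module: {}".format(request.node.name))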

Perform sanity check before running tests with pytest

I would like to perform some sanity check when I run tests using pytest. Typically, I want to check that some executables are accessible to the tests, and that the options provided by the user on the command-line are valid.
The closest thing I found was to use a fixture such as:
#pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
if not good:
sys.exit(0)
But this still runs all the tests. I'd like for the script to fail before attempting to run the tests.
You shouldn't need to validate the command line options explicitly; this is done by the arg parser, which will abort the execution early if necessary. As for the condition checking, you are not far from the solution. Use:
pytest.exit for an immediate abort
pytest.skip to skip all tests
pytest.xfail to fail all tests (this is an expected failure, though, so it won't mark the whole run as failed)
Example fixture:
import shutil

import pytest


@pytest.fixture(scope='session', autouse=True)
def precondition():
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
        # or skip each test
        # pytest.skip('Install spam before running this test suite.')
        # or make it an expected failure
        # pytest.xfail('Install spam before running this test suite.')
xdist compatibility
Invoking pytest.exit() in a test run with xdist will only crash the current worker and will not abort the main process. You have to move the check to a hook that is invoked before the runtestloop starts (i.e. anything before the pytest_runtestloop hook), for example:
# conftest.py
import shutil

import pytest


def pytest_sessionstart(session):
    if not shutil.which('spam'):
        # immediate shutdown
        pytest.exit('Install spam before running this test suite.')
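As for the user-supplied command line options mentioned in the question: if you declare them via pytest_addoption, the underlying argparse machinery can reject bad values before any test runs, e.g. with choices or type. A minimal sketch (the --backend option is made up for illustration):

# conftest.py (sketch)
def pytest_addoption(parser):
    # pytest exits with a usage error if the value is not one of the choices
    parser.addoption(
        "--backend",
        choices=["sqlite", "postgres"],
        default="sqlite",
        help="database backend to test against",
    )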
If you want to run a sanity check before the whole test session, you can use a conftest.py file: https://docs.pytest.org/en/2.7.3/plugins.html?highlight=re
Just add your function with the same scope and autouse option to conftest.py:
#pytest.fixture(scope="session", autouse=True)
def sanity_check(request):
if not good:
pytest.exit("Error message here")

Python - Using pytest to skip test unless specified

Background
I am using pytest to test a web scraper that pushes the data to a database. The class only pulls the html and pushes it to a database to be parsed later. Most of my tests use dummy data to represent the html.
Question
I want to write a test where a web page from the site is actually scraped, but I want that test to be automatically turned off unless specified. A similar scenario would be an expensive or time-consuming test that you do not want to run every time.
Expected Solution
I am expecting some kind of marker that suppresses a test unless I tell pytest to run all suppressed tests, but I do not see one in the documentation.
What I have done
I am currently using the skip marker and commenting it out.
I tried to use the skipif marker and pass an argument to the Python script using the command pytest test_file.py 1, together with the code below in the test file. The problem is that when I try to provide an argument to my test file, pytest expects it to be another file name, so I get the error "no tests run in 0.00 seconds, ERROR: file not found: 1".
if len(sys.argv) == 1:
    RUN_ALL_TESTS = False
else:
    RUN_ALL_TESTS = True

...
# other tests
...

@pytest.mark.skipif(RUN_ALL_TESTS)
def test_scrape_website():
    ...
I might be able to treat the test as a fixture and use @pytest.fixture(autouse=False); I'm not sure how to override the autouse variable, though.
A similar solution was described in "How to skip a pytest using an external fixture?", but it seems more complicated than what I need.
The docs describe exactly your problem: https://docs.pytest.org/en/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option. Copying from there:
Here is a conftest.py file adding a --runslow command line option to
control skipping of pytest.mark.slow marked tests:
# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest


def test_func_fast():
    pass


@pytest.mark.slow
def test_func_slow():
    pass
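With that conftest.py in place, the slow test is skipped by default and only runs when the option is given:

pytest            # test_func_slow is reported as skipped
pytest --runslow  # both tests run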
There are a couple of ways to handle this; I'll go over two common approaches I've seen in Python codebases.
1) Separate your tests by putting the "optional" tests in another directory.
Not sure what your project layout looks like, but you can do something like this (only the test directory matters; the rest is just a toy example layout):
README.md
setup.py
requirements.txt
test/
    unit/
        test_something.py
        test_something_else.py
    integration/
        test_optional.py
application/
    __init__.py
    some_module.py
Then, when you invoke pytest, you invoke it by doing pytest test/unit if you want to run just the unit tests (i.e. only test_something*.py files), or pytest test/integration if you want to run just the integration tests (i.e. only test_optional.py), or pytest test if you want to run all the tests. So, by default, you can just run pytest test/unit.
I recommend wrapping these calls in some sort of script. I prefer make, since it is powerful for this type of wrapping. Then you can say make test and it just runs your default (fast) test suite, or make test_all and it runs all the tests (which may or may not be slow).
Example Makefile you could wrap with (recipe lines must be indented with tabs):
.PHONY: all clean install test test_int test_all uninstall

all: install

clean:
	rm -rf build
	rm -rf dist
	rm -rf *.egg-info

install:
	python setup.py install

test: install
	pytest -v -s test/unit

test_int: install
	pytest -v -s test/integration

test_all: install
	pytest -v -s test

uninstall:
	pip uninstall app_name
2) Mark your tests judiciously with the @pytest.mark.skipif decorator, but use an environment variable as the trigger
I don't like this solution as much; it feels a bit haphazard to me (it's hard to tell which set of tests is being run on any given pytest run). However, what you can do is define an environment variable and then rope it into the module to decide whether to run all your tests. Environment variables are shell-dependent, but I'll assume a bash environment since that's a popular shell.
You could do export TEST_LEVEL="unit" for just fast unit tests (so this would be your default), or export TEST_LEVEL="all" for all your tests. Then in your test files, you can do what you were originally trying to do like this:
import os

import pytest

...

@pytest.mark.skipif(os.environ.get("TEST_LEVEL", "unit") == "unit",
                    reason="needs TEST_LEVEL=all")
def test_scrape_website():
    ...
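The level can then also be chosen per invocation (bash), without exporting it permanently:

TEST_LEVEL=unit pytest   # default: fast tests only
TEST_LEVEL=all pytest    # include the scraping test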
Note: Naming the test levels "unit" and "integration" is irrelevant. You can name them whatever you want. You can also have many many levels (like maybe nightly tests or performance tests).
Also, I think option 1 is the best way to go, since it not only cleanly separates the tests but also adds semantics and clarity to what the tests mean and represent. But there is no "one size fits all" in software; you'll have to decide which approach you like based on your particular circumstances.
HTH!
A very simple solution is to use the -k argument. You can use the -k parameter to deselect certain tests. -k tries to match its argument to any part of the test's name or markers, and you can invert the match by using not (you can also use the boolean operators and and or). Thus -k 'not slow' deselects tests which have "slow" in the name, have a marker with "slow" in the name, or whose class/module name contains "slow".
For example, given this file:
import pytest


def test_true():
    assert True


@pytest.mark.slow
def test_long():
    assert False


def test_slow():
    assert False
When you run:
pytest -k 'not slow'
It outputs something like this (note that both failing tests were deselected because they matched the filter):
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0
rootdir: c:\Users\User\Documents\python, inifile:
collected 3 items
test_thing.py . [100%]
============================= 2 tests deselected ==============================
=================== 1 passed, 2 deselected in 0.02 seconds ====================
Because of the eager matching, you might want to do something like putting all your unit tests in a directory called unittest and then marking the slow ones as slow_unittest (so as to not accidentally match a test that just happens to have slow in its name). You could then use -k 'unittest and not slow_unittest' to match all your quick unit tests.
More pytest example marker usage
Here is a little class for reusing @xverges's code with multiple marks/CLI options:
from dataclasses import dataclass

import pytest


@dataclass
class TestsWithMarkSkipper:
    ''' Util to skip tests with mark, unless cli option provided. '''
    test_mark: str
    cli_option_name: str
    cli_option_help: str

    def pytest_addoption_hook(self, parser):
        parser.addoption(
            self.cli_option_name,
            action="store_true",
            default=False,
            help=self.cli_option_help,
        )

    def pytest_collection_modifyitems_hook(self, config, items):
        if not config.getoption(self.cli_option_name):
            self._skip_items_with_mark(items)

    def _skip_items_with_mark(self, items):
        reason = "need {} option to run".format(self.cli_option_name)
        skip_marker = pytest.mark.skip(reason=reason)
        for item in items:
            if self.test_mark in item.keywords:
                item.add_marker(skip_marker)
Usage example (must be put in conftest.py):
slow_skipper = TestsWithMarkSkipper(
    test_mark='slow',
    cli_option_name="--runslow",
    cli_option_help="run slow tests",
)

pytest_addoption = slow_skipper.pytest_addoption_hook
pytest_collection_modifyitems = slow_skipper.pytest_collection_modifyitems_hook

PyCharm + cProfile + py.test --> pstat snapshot view + call graph are empty

In PyCharm, I set up py.test as the default test runner.
I have a simple test case:
import unittest
import time


def my_function():
    time.sleep(0.42)


class MyTestCase(unittest.TestCase):
    def test_something(self):
        my_function()
Now I run the test by right-clicking the file and choosing Profile 'py.test in test_profile.py'.
I see the test running successfully in the console (it says collected 1 items). However, the Statistics/Call Graph view showing the generated pstat file is empty and says Nothing to show.
I would expect to see profiling information for the test_something and my_function. What am I doing wrong?
Edit 1:
If I change the name of the file to something which does not start with test_, remove the unittest.TestCase, and add a __main__ block that calls my_function, I can finally run cProfile without py.test and I see results.
However, I am working on a large project with tons of tests. I would like to directly profile these tests instead of writing extra profiling scripts. Is there a way to call the py.test test-discovery module so I can retrieve all tests of the project recursively? (the unittest discovery will not suffice since we yield a lot of parametrized tests in generator functions which are not recognized by unittest). This way I could at least solve the problem with only 1 additional script.
Here is a workaround. Create an additional Python script with the following contents (adapt the path to the tests root accordingly):
import os

import pytest

if __name__ == '__main__':
    source_dir = os.path.dirname(os.path.abspath(__file__))
    test_dir = os.path.abspath(os.path.join(source_dir, "../"))
    # pass the test directory and config file as command line arguments
    pytest.main([test_dir, "-c", "setup.cfg"])
The script filename must not start with test_, otherwise PyCharm will force you to run it with py.test. Then right-click the file and run it with Profile.
This also comes in handy for running it with Coverage.
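If you just need the numbers rather than PyCharm's viewer, one alternative (a sketch; the paths are illustrative, and cProfile's -m option requires Python 3.7+) is to profile the whole pytest run from the command line and inspect the dump with pstats:

python -m cProfile -o pytest_profile.pstats -m pytest test/
python -c "import pstats; pstats.Stats('pytest_profile.pstats').sort_stats('cumulative').print_stats(20)"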

python nosetests equivalent of unittest Testsuite in the test file

In nosetests, I know that you can specify which tests you want to run via a nosetests config file as such:
[nosetests]
tests=testIWT_AVW.py:testIWT_AVW.tst_bynd1,testIWT_AVW.py:testIWT_AVW.tst_bynd3
However, the above just looks messy and becomes harder to maintain as tests are added, especially without being able to use line breaks. I found it a lot more convenient to specify which tests to run using unittest's TestSuite feature, e.g.:
def custom_suite():
    suite = unittest.TestSuite()
    suite.addTest(testIWT_AVW('tst_bynd1'))
    suite.addTest(testIWT_AVW('tst_bynd3'))
    return suite


if __name__ == "__main__":
    runner = unittest.TextTestRunner()
    runner.run(custom_suite())
Question: How do I specify which tests should be run by nosetests within my .py file? Thanks.
P.S. If there is a way to specify tests via a nosetests config file that doesn't force all tests onto one line, I would be open to that as well, as a second alternative.
I'm not entirely sure whether you want to run the tests programmatically or from the command line. Either way this should cover both:
import itertools

from nose import run
from nose.loader import TestLoader
from nose.suite import LazySuite

paths = ("/path/to/my/project/module_a",
         "/path/to/my/project/module_b",
         "/path/to/my/project/module_c")


def run_my_tests():
    all_tests = ()
    for path in paths:
        all_tests = itertools.chain(all_tests, TestLoader().loadTestsFromDir(path))
    suite = LazySuite(all_tests)
    run(suite=suite)


if __name__ == '__main__':
    run_my_tests()
Note that the nose.loader.TestLoader object has a number of different methods available for loading tests.
You can call the run_my_tests function from other code, or you can run this file from the command line with a Python interpreter rather than through nose. If you have other nose configuration, you may need to pass that in programmatically as well.
If I'm correctly understanding your question, you have several options here:
you can mark your tests with special nose decorators: istest and nottest. See docs
you can mark tests with tags
you can join test cases into test suites. I haven't used this myself, but it seems that you have to override nose's default test discovery to make it respect your test suites (see docs)
Hope that helps.
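For the first bullet, a minimal sketch of the decorator approach: istest marks a function so nose collects it even though its name does not match the test pattern (the containing module still has to be one nose picks up), while nottest excludes a function that would otherwise be collected.

from nose.tools import istest, nottest


@istest
def runs_despite_the_name():
    assert True


@nottest
def test_helper_that_should_not_run():
    pass  # excluded from collection despite the test_ prefix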
