I have a long-running test that lasts 2 days and that I don't want to include in a usual test run. I also don't want to type command-line parameters that deselect it and other tests on every usual run. I would prefer a test that is deselected by default and that I can select when I actually need it. I tried renaming the test from test_longrun to longrun and using the command
py.test mytests.py::longrun
but that does not work.
As an alternative to the pytest_configure solution below, I found pytest.mark.skipif.
You need to put pytest_addoption() into conftest.py:
def pytest_addoption(parser):
    parser.addoption('--longrun', action='store_true', dest="longrun",
                     default=False, help="enable longrun-decorated tests")
Then you use skipif in the test file:
import pytest

longrun = pytest.mark.skipif("not config.getoption('longrun')")

def test_usual(request):
    assert False, 'usual test failed'

@longrun
def test_longrun(request):
    assert False, 'longrun failed'
On the command line,
py.test
will not execute test_longrun(), but
py.test --longrun
will also execute test_longrun().
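For reference, the string form of skipif is evaluated lazily by pytest with config available in the evaluation namespace, which is why the option lookup works here. An abbreviated run might look like this (output trimmed, using the example tests above):
$ py.test -v mytests.py
mytests.py::test_usual FAILED
mytests.py::test_longrun SKIPPED
$ py.test -v --longrun mytests.py
mytests.py::test_usual FAILED
mytests.py::test_longrun FAILED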
Try decorating your test with @pytest.mark.longrun.
In your conftest.py:
def pytest_addoption(parser):
    parser.addoption('--longrun', action='store_true', dest="longrun",
                     default=False, help="enable longrun-decorated tests")

def pytest_configure(config):
    if not config.option.longrun:
        setattr(config.option, 'markexpr', 'not longrun')
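One caveat worth noting: setting markexpr here clobbers any -m expression the user passes on the command line. A minimal sketch of a guard, assuming you want a user-supplied -m to win:
def pytest_configure(config):
    if not config.option.longrun and not config.option.markexpr:
        # don't override an -m expression given on the command line
        setattr(config.option, 'markexpr', 'not longrun')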
This is a slightly different way.
Decorate your test with @pytest.mark.longrun:
@pytest.mark.longrun
def test_something():
    ...
At this point you can run everything except the tests marked with it using -m 'not longrun':
$ pytest -m 'not longrun'
or, if you only want to run the longrun-marked tests:
$ pytest -m 'longrun'
But to make -m 'not longrun' the default, add it to addopts in pytest.ini:
[pytest]
addopts =
    -m 'not longrun'
    ...
If you want to run all the tests, you can do
$ pytest -m 'longrun or not longrun'
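Recent pytest versions also warn about unknown marks (PytestUnknownMarkWarning), so it is worth registering the marker in the same pytest.ini; a minimal sketch:
[pytest]
addopts =
    -m 'not longrun'
markers =
    longrun: marks a test as long-running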
How do I create a new pytest command-line flag that takes an argument?
For example, I have the following:
test_A.py::test_a
@pytest.mark.lvl1
def test_a(): ...
.
.
.
test_B.py::test_b
@pytest.mark.lvl10
def test_b(): ...
test_C.py::test_c
@pytest.mark.lvl20
def test_c(): ...
On the command line I only want to run tests with marker levels less than or equal to lvl10 (so lvl1 to lvl10 tests).
How can I do this without having to manually type pytest -m 'lvl1 or lvl2 or lvl3 ...' on the command line?
I want to create a new pytest command-line arg like:
pytest --lte="lvl10" (lte meaning less than or equal)
I was thinking along the lines of defining the --lte flag to do the following:
markers = []
Do a pytest collect to find all tests that have a marker containing 'lvl', and add that marker to the markers list only if the integer after 'lvl' is less than or equal to 10 (lvl10). Then call pytest -m with that list of markers ('lvl1 or lvl2 or lvl3 ...').
If you modify your marker to accept the level as an argument, you can run all tests less than or equal to the specified level by adding a custom pytest_runtest_setup to your conftest.py.
Sample Test
@pytest.mark.lvl(1)
def test_a():
    ...
conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--level", type=int, action="store", metavar="num",
        help="only run tests matching the specified level or lower",
    )

def pytest_configure(config):
    # register the "lvl" marker
    config.addinivalue_line(
        "markers", "lvl(num): mark test with the level it belongs to"
    )

def pytest_runtest_setup(item):
    req_level = item.config.getoption("--level")
    if req_level is None:
        return  # no --level given, run everything
    marker = next(item.iter_markers(name="lvl"), None)
    if marker is not None and marker.args[0] > req_level:
        pytest.skip(f"only tests with level {req_level} or lower were requested")
Sample invocation
pytest --level 10
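Illustratively, with three tests marked lvl(1), lvl(10) and lvl(20) (file and test names assumed from the question above), a run could look like:
$ pytest --level 10 -v
test_A.py::test_a PASSED
test_B.py::test_b PASSED
test_C.py::test_c SKIPPED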
I made a pytest test which tests all files in a given directory:
@pytest.mark.dir
def test_dir(target_dir):
    for filename in os.listdir(target_dir):
        test_single(filename)

def test_single(filename):
    ...
    ...
    assert good or bad
The target_dir is supplied from the command line:
pytest -m dir --target_dir=/path/to/my_dir
pytest_addoption() is used to parse the command line (code is omitted for clarity).
The output from the test gives a single pass/fail mark even though test_single() runs hundreds of times. Would it be possible to get a pass/fail mark for each file?
I think the way to go is to parametrize your test function so that target_dir is effectively split into individual files via a filename fixture:
# conftest.py
import os

def pytest_addoption(parser):
    parser.addoption("--target_dir", action="store")

def pytest_generate_tests(metafunc):
    option_value = metafunc.config.option.target_dir
    if "filename" in metafunc.fixturenames and option_value is not None:
        metafunc.parametrize("filename", os.listdir(option_value))
# test.py
import pytest

@pytest.mark.dir
def test_file(filename):
    # insert your assertions here
    pass
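With this parametrization, each file becomes its own test item, so you get a separate pass/fail per file. An illustrative run (file names assumed):
$ pytest -m dir --target_dir=/path/to/my_dir -v
test.py::test_file[a.txt] PASSED
test.py::test_file[b.txt] FAILED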
I have some tests I marked with an appropriate marker. If I run pytest, by default they run, but I would like to skip them by default. The only option I know of is to explicitly say "not marker" at pytest invocation, but I would like them not to run by default unless the marker is explicitly requested at the command line.
A slight modification of the example in Control skipping of tests according to command line option:
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    keywordexpr = config.option.keyword
    markexpr = config.option.markexpr
    if keywordexpr or markexpr:
        return  # let pytest handle this
    skip_mymarker = pytest.mark.skip(reason='mymarker not selected')
    for item in items:
        if 'mymarker' in item.keywords:
            item.add_marker(skip_mymarker)
Example tests:
import pytest

def test_not_marked():
    pass

@pytest.mark.mymarker
def test_marked():
    pass
Running the tests with the marker:
$ pytest -v -k mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Or:
$ pytest -v -m mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Without the marker:
$ pytest -v
...
collected 2 items
test_spam.py::test_not_marked PASSED
test_spam.py::test_marked SKIPPED
...
Instead of explicitly saying "not marker" at pytest invocation, you can add the following to pytest.ini:
[pytest]
addopts = -m "not marker"
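Since addopts is prepended to the command-line arguments and -m is a last-one-wins store option, an -m given explicitly on the command line still overrides the default:
$ pytest                  # addopts applies: marked tests are deselected
$ pytest -m marker        # overrides addopts: runs only the marked tests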
I have a framework which works under py.test. py.test can generate beautiful reports with the --html and --junitxml params. But the clients using my framework don't always pass these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports into the log folder. So I need to generate the path for the report at runtime. Can I do this with fixtures? Or maybe through the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
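get_custom_xml_path() is left to implement; a minimal sketch that puts a timestamped report into a log folder (the folder and the naming scheme are assumptions, not from the question):
import os
import time

def get_custom_xml_path():
    # hypothetical helper: timestamped report under ./log
    os.makedirs('log', exist_ok=True)
    return os.path.join('log', 'report_%d.xml' % int(time.time()))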
The accepted answer is probably a bit more complicated than necessary for most people, for a few reasons:
The decorator doesn't help; it doesn't matter when this executes.
There is no need to make a custom LogXML, since you can just set the property here and it will be used.
slaveinput is specific to the pytest plugin xdist. There is no need to check for it, especially if you don't use xdist.
First of all, if you want to implicitly add command-line args to pytest, you can use a pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts = --verbose --junit-xml=/tmp/myreport.xml
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile
import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or in the addopts config value.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure the pytest.ini file with parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py; the code looks simpler there.
import time

from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
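Note that newer versions of pytest-xdist renamed slaveinput to workerinput; a version-tolerant variant of the check (same imports as above):
def pytest_configure(config):
    # workerinput replaces slaveinput in newer pytest-xdist
    is_worker = hasattr(config, 'workerinput') or hasattr(config, 'slaveinput')
    if not config.option.xmlpath and not is_worker:
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)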
Just to keep things clear: pytest uses argparse, and request.config.option is an argparse.Namespace object. So if you would like to simulate a command-line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but lets you switch it back by passing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose volumes on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True
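Illustrative invocations (the docker-compose option itself is assumed to come from a docker-compose pytest plugin):
$ pytest                    # fixture forces docker_compose_remove_volumes to True
$ pytest --keep-containers  # volumes are kept for debugging a failure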
I have added a conftest.py at the same directory level as my test file. The content of my conftest.py:
import pytest

def pytest_addoption(parser):
    parser.addoption("--l", action="store", help="Get license value")

@pytest.fixture
def lic(request):
    return request.config.getoption("--l")
and the following is my test file:
def test(lic):
    print("testing license: %s" % lic)
    assert 0
But I still get the following error:
pytest .\source\test_latestLinuxAgent.py --l=test
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: ambiguous option: --l=test could match --lf, --last-failed
As the error message says, the --l option is ambiguous. What does that mean?
Let me explain it with an example. If you have a --zaaaa option, you can abbreviate it as --z. However, there is one condition: --zaaaa must be the only option starting with z. Otherwise, the parser does not know which option to choose.
You can't define the --l option because there are already two options starting with l: --lf and --last-failed.
I suggest creating a non-conflicting option; --license would be nice.
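A minimal sketch of the conftest.py with the renamed option (only the option name changes from the question's code):
import pytest

def pytest_addoption(parser):
    # --license has no prefix clash with --lf/--last-failed
    parser.addoption("--license", action="store", help="Get license value")

@pytest.fixture
def lic(request):
    return request.config.getoption("--license")

invoked as pytest .\source\test_latestLinuxAgent.py --license=test.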