pytest command line argument failing - python

I have added conftest.py at the same directory level as my test file. The content of my conftest.py:
import pytest

def pytest_addoption(parser):
    parser.addoption("--l", action="store", help="Get license value")

@pytest.fixture
def lic(request):
    return request.config.getoption("--l")
and the following is my test file:
def test(lic):
    print("testing license : %s" % lic)
    assert 0
But I still get the following error:
pytest .\source\test_latestLinuxAgent.py --l=test
pytest : usage: pytest [options] [file_or_dir] [file_or_dir] [...]
At line:1 char:1
+ pytest .\source\test_latestLinuxAgent.py --l=test
pytest: error: ambiguous option: --l=test could match --lf, --last-failed

As the error message says, the --l option is ambiguous. What does that mean?
Let me explain it with an example. If you have a --zaaaa option, argparse lets you abbreviate it as --z. However, there is one condition: --zaaaa must be the only option starting with z. Otherwise, the parser does not know which option you mean.
You can't use a --l option because there are already two pytest options starting with l: --lf and --last-failed.
I suggest creating a non-conflicting option; --license would be nice.
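For illustration, here is a minimal sketch of the asker's conftest.py with the option renamed to the non-conflicting --license (the lic fixture is kept as in the question):
# conftest.py -- sketch with a non-conflicting option name
import pytest

def pytest_addoption(parser):
    parser.addoption("--license", action="store", help="Get license value")

@pytest.fixture
def lic(request):
    return request.config.getoption("--license")
The test is then invoked as pytest .\source\test_latestLinuxAgent.py --license=test.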

Related

Possible to have pytest within pytest?

I made a pytest test which checks all files in a given directory.
@pytest.mark.dir
def test_dir(target_dir):
    for filename in os.listdir(target_dir):
        test_single(filename)

def test_single(filename):
    ...
    ...
    assert( good or bad )
The target_dir is supplied from command line:
pytest -m dir --target_dir=/path/to/my_dir
pytest_addoption() is used to parse the command line (code is omitted for clarity).
The output from the test gives a single pass/fail mark even though test_single() runs hundreds of times. Would it be possible to get a pass/fail mark for each file?
I think the way to go is to parametrize your test function so that target_dir is effectively split into individual files via a filename fixture:
# conftest.py
import os

def pytest_addoption(parser):
    parser.addoption("--target_dir", action="store")

def pytest_generate_tests(metafunc):
    option_value = metafunc.config.option.target_dir
    if "filename" in metafunc.fixturenames and option_value is not None:
        metafunc.parametrize("filename", os.listdir(option_value))
# test.py
import pytest

@pytest.mark.dir
def test_file(filename):
    # insert your assertions
    pass
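With that conftest.py in place, a run such as pytest -v --target_dir=/path/to/my_dir (the path is illustrative) should report each parametrized case separately, roughly as test.py::test_file[<filename>], so every file gets its own pass/fail mark.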

How can I ensure tests with a marker are only run if explicitly asked in pytest?

I have some tests I marked with an appropriate marker. If I run pytest, they run by default, but I would like to skip them by default. The only option I know is to explicitly pass -m "not marker" at pytest invocation, but I would like them not to run by default unless the marker is explicitly requested on the command line.
A slight modification of the example in Control skipping of tests according to command line option:
# conftest.py
import pytest

def pytest_collection_modifyitems(config, items):
    keywordexpr = config.option.keyword
    markexpr = config.option.markexpr
    if keywordexpr or markexpr:
        return  # let pytest handle this
    skip_mymarker = pytest.mark.skip(reason='mymarker not selected')
    for item in items:
        if 'mymarker' in item.keywords:
            item.add_marker(skip_mymarker)
Example tests:
import pytest

def test_not_marked():
    pass

@pytest.mark.mymarker
def test_marked():
    pass
Running the tests with the marker:
$ pytest -v -k mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Or:
$ pytest -v -m mymarker
...
collected 2 items / 1 deselected / 1 selected
test_spam.py::test_marked PASSED
...
Without the marker:
$ pytest -v
...
collected 2 items
test_spam.py::test_not_marked PASSED
test_spam.py::test_marked SKIPPED
...
Instead of explicitly saying "not marker" at pytest invocation, you can add the following to pytest.ini:
[pytest]
addopts = -m "not marker"
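Note (not from the original answer): because addopts is prepended to the command line arguments, an explicit -m given at invocation should override this default, so the marked tests can still be selected when needed, e.g.:
$ pytest -m marker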

How to set dynamic default parameters for py.test?

I have a framework which works under py.test. py.test can generate nice reports with the --html and --junitxml params. But clients using my framework do not always type these params on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports in a log folder. So I need to generate the path for the report at runtime. Can I do this with fixtures? Or maybe via the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
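get_custom_xml_path() is left unimplemented there; a minimal sketch, assuming you want a timestamped report inside a log folder as the question describes, might be:
import os
import time

def get_custom_xml_path():
    # hypothetical helper: put a timestamped JUnit report under ./log
    os.makedirs("log", exist_ok=True)
    return os.path.join("log", "report_%d.xml" % int(time.time()))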
The accepted answer is probably a bit more complicated than necessary for most people for a few reasons:
The decorator doesn't help. It doesn't matter when this executes.
There is no need to make a custom LogXML since you can just set the property here and it will be used.
slaveinput is specific to the xdist plugin. I don't think there is any need to check for that, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile
import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or in the addopts value in the config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure pytest.ini file with parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py; the code looks simpler there.
import time
from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to keep things clear: pytest uses argparse, and request.config.option is an argparse.Namespace object. So, if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can set the attribute docker_compose_remove_volumes directly on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but allows you to switch it back by providing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True

Default skip test unless command line parameter present in py.test

I have a long-running test, which lasts 2 days, that I don't want to include in a usual test run. I also don't want to type command line parameters that deselect it and other tests at every usual test run. I would prefer to select a default-deselected test only when I actually need it. I tried renaming the test from test_longrun to longrun and using the command
py.test mytests.py::longrun
but that does not work.
As an alternative to the pytest_configure solution, I found pytest.mark.skipif.
You need to put pytest_addoption() into conftest.py:
def pytest_addoption(parser):
    parser.addoption('--longrun', action='store_true', dest="longrun",
                     default=False, help="enable longrundecorated tests")
And you use skipif in the test file.
import pytest

longrun = pytest.mark.skipif("not config.getoption('longrun')")

def test_usual(request):
    assert False, 'usual test failed'

@longrun
def test_longrun(request):
    assert False, 'longrun failed'
In the command line
py.test
will not execute test_longrun(), but
py.test --longrun
will also execute test_longrun().
Try decorating your test with @pytest.mark.longrun.
In your conftest.py:
def pytest_addoption(parser):
    parser.addoption('--longrun', action='store_true', dest="longrun",
                     default=False, help="enable longrundecorated tests")

def pytest_configure(config):
    if not config.option.longrun:
        setattr(config.option, 'markexpr', 'not longrun')
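For example, with a test file like the following (a sketch; file and test names are arbitrary), a plain pytest run deselects the marked test through the injected markexpr, while pytest --longrun runs both:
# test_example.py -- illustration only
import pytest

def test_quick():
    pass

@pytest.mark.longrun
def test_longrun():
    pass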
This is a slightly different way.
Decorate your test with @pytest.mark.longrun:
@pytest.mark.longrun
def test_something():
    ...
At this point you can run everything except the tests marked with it, using -m 'not longrun':
$ pytest -m 'not longrun'
or if you only want to run the longrun marked tests,
$ pytest -m 'longrun'
But, to make -m 'not longrun' the default, in pytest.ini, add it to addopts:
[pytest]
addopts =
-m 'not longrun'
...
If you want to run all the tests, you can do
$ pytest -m 'longrun or not longrun'
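One detail not covered above: newer pytest versions warn about unknown marks, so it may also be worth registering the marker in the same pytest.ini, for example:
[pytest]
addopts =
    -m 'not longrun'
markers =
    longrun: marks a test as long-running (deselected by default)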

How and where does py.test find fixtures

Where and how does py.test look for fixtures? I have the same code in 2 files in the same folder. When I delete conftest.py, cmdopt cannot be found when running test_conf.py (also in the same folder). Why is sonoftest.py not searched?
# content of test_sample.py
def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 0  # to see what was printed
content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
                     help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
content of sonoftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
                     help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
The docs say
http://pytest.org/latest/fixture.html#fixture-function
pytest finds the test_ehlo because of the test_ prefix. The test function needs a function argument named smtp. A matching fixture function is discovered by looking for a fixture-marked function named smtp.
smtp() is called to create an instance.
test_ehlo() is called and fails in the last line of the test function.
py.test will import conftest.py and all Python files that match the python_files pattern, by default test_*.py. If you have a test fixture, you need to include or import it from conftest.py or from the test files that depend on it:
from sonoftest import pytest_addoption, cmdopt
Here is the order and where py.test looks for fixtures (and tests) (taken from here):
py.test loads plugin modules at tool startup in the following way:
by loading all builtin plugins
by loading all plugins registered through setuptools entry points.
by pre-scanning the command line for the -p name option and loading the specified plugin before actual command line parsing.
by loading all conftest.py files as inferred by the command line invocation (test files and all of its parent directories). Note that conftest.py files from sub directories are by default not loaded at tool startup.
by recursively loading all plugins specified by the pytest_plugins variable in conftest.py files
I had the same issue and spent a lot of time finding a simple solution; this example is for others who have a situation similar to mine.
conftest.py:
import pytest

pytest_plugins = [
    "some_package.sonoftest"
]

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
                     help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
some_package/sonoftest.py:
import pytest

@pytest.fixture
def sono_cmdopt(request):
    return request.config.getoption("--cmdopt")
some_package/test_sample.py
def test_answer1(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 0  # to see what was printed

def test_answer2(sono_cmdopt):
    if sono_cmdopt == "type1":
        print("first")
    elif sono_cmdopt == "type2":
        print("second")
    assert 0  # to see what was printed
You can find a similar example here: https://github.com/pytest-dev/pytest/issues/3039#issuecomment-464489204
and another one here: https://stackoverflow.com/a/54736376/6655459
Description from official pytest documentation: https://docs.pytest.org/en/latest/reference.html?highlight=pytest_plugins#pytest-plugins
Note that the respective directories referred to in "some_package.test_sample" need to have __init__.py files for the plugins to be loaded by pytest.
