I'm using TestInfra with the Ansible backend for testing purposes. Everything works fine except calling Ansible itself while running the tests.
test.py
import pytest

def test_zabbix_agent_package(host):
    package = host.package("zabbix-agent")
    assert package.is_installed
    package_version = host.ansible("debug", "msg={{ zabbix_agent_version }}")["msg"]
    (...)
where zabbix_agent_version is an Ansible variable from group_vars. It can be obtained by running this playbook
- hosts: all
  become: true
  tasks:
    - name: debug
      debug: msg={{ zabbix_agent_version }}
The command executing the tests:
pytest --connection=ansible --ansible-inventory=inventory --hosts=$hosts -v test.py
ansible.cfg
[defaults]
timeout = 10
host_key_checking = False
library=library/
retry_files_enabled = False
roles_path=roles/
pipelining=true
ConnectTimeout=60
remote_user=deploy
private_key_file=/opt/jenkins/.ssh/deploy
The output I get is:
self = <ansible>, module_name = 'debug', module_args = 'msg={{ zabbix_agent_version }}', check = True, kwargs = {}
result = {'failed': True, 'msg': "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'zabbix_agent_version' is undefined"}

    def __call__(self, module_name, module_args=None, check=True, **kwargs):
        if not self._host.backend.HAS_RUN_ANSIBLE:
            raise RuntimeError((
                "Ansible module is only available with ansible "
                "connection backend"))
        result = self._host.backend.run_ansible(
            module_name, module_args, check=check, **kwargs)
        if result.get("failed", False) is True:
>           raise AnsibleException(result)
E           AnsibleException: Unexpected error: {'failed': True,
E            'msg': u"the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'zabbix_agent_version' is undefined"}

/usr/lib/python2.7/site-packages/testinfra/modules/ansible.py:70: AnsibleException
Any idea why Ansible can't see this variable when it is called through testinfra's Ansible module, while it can see it when running Ansible on its own?
If zabbix_agent_version is a variable set using group_vars, then it seems as if you should be accessing it using host.ansible.get_variables() rather than running a debug task. In any case, both should work. If I have, in my current directory:
test_myvar.py
group_vars/
    all.yml
And in group_vars/all.yml I have:
myvar: myvalue
And in test_myvar.py I have:
def test_myvar_using_get_variables(host):
    all_variables = host.ansible.get_variables()
    assert 'myvar' in all_variables
    assert all_variables['myvar'] == 'myvalue'

def test_myvar_using_debug_var(host):
    result = host.ansible("debug", "var=myvar")
    assert 'myvar' in result
    assert result['myvar'] == 'myvalue'

def test_myvar_using_debug_msg(host):
    result = host.ansible("debug", "msg={{ myvar }}")
    assert 'msg' in result
    assert result['msg'] == 'myvalue'
Then all tests pass:
$ py.test --connection=ansible --ansible-inventory=hosts -v test_myvar.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.13, pytest-3.2.3, py-1.4.34, pluggy-0.4.0 -- /home/lars/env/common/bin/python2
cachedir: .cache
rootdir: /home/lars/tmp/testinfra, inifile:
plugins: testinfra-1.8.1.dev2
collected 3 items
test_myvar.py::test_myvar_using_get_variables[ansible://localhost] PASSED
test_myvar.py::test_myvar_using_debug_var[ansible://localhost] PASSED
test_myvar.py::test_myvar_using_debug_msg[ansible://localhost] PASSED
=========================== 3 passed in 1.77 seconds ===========================
Can you confirm that the layout of your files (in particular, the location of your group_vars directory relative to your tests) matches what I've shown here?
I chased an answer to this for days. Here's what finally worked for me. Essentially, you are using testinfra's Ansible module to call Ansible's include_vars module:
import pytest

@pytest.fixture()
def AnsibleVars(host):
    ansible_vars = host.ansible(
        "include_vars", "file=./group_vars/all/vars.yml")
    return ansible_vars["ansible_facts"]
Then in my tests, I included the function as a parameter:
def test_something(host, AnsibleVars):
This solution was taken partially from https://github.com/metacloud/molecule/issues/151
I had an interesting issue where I was trying to include the variables from my main playbook and received a "must be stored as a dictionary/hash" error when including the playbook.yml file. Separating the variables out into the group_vars/all/vars.yml file resolved that error.
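For illustration, here is a minimal sketch of how the AnsibleVars fixture might then be consumed in a test; the zabbix_agent_version key simply mirrors the original question and is an assumption about what your vars file contains:

def test_zabbix_agent_version_matches_vars(host, AnsibleVars):
    # 'zabbix_agent_version' is assumed to be defined in group_vars/all/vars.yml
    expected_version = AnsibleVars["zabbix_agent_version"]
    package = host.package("zabbix-agent")
    assert package.is_installed
    assert package.version.startswith(str(expected_version))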
I'm attempting to write a test fixture based on randomly generated data. This randomly generated data needs to be able to accept a seed so that we can generate the same data on two different computers at the same time.
I'm using pytest's parser.addoption (I think it's a fixture) to add this ability.
My core issue is that I'd like to be able to parameterize a randomly generated list that uses a fixture as an argument.
from secrets import randbelow
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(10))

@fixture(scope="session")
def seed(pytestconfig):
    return pytestconfig.getoption("seed")

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # each element here is actually a dictionary but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize(group_item=test_context["group_items"])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
This leads to the following result:
% pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{Home}/tests, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:12: in <module>
???
E TypeError: 'function' object is not subscriptable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not subscriptable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.18s ============================================================================================================================================================================
I think I might be able to work around this issue by making an empty test that leaks test_context up to the global scope, but that feels really brittle. I'm looking for another method that still lets me:
Use the seed fixture to generate data
Generate one test per element in the generated list
Not depend on the order in which the tests are run.
Edit
Here's an example of this not working with straight pytest
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture
def seed():
    return 1

@fixture
def test_context(seed):
    return [seed, 'a', 'test', 'list']

@pytest.fixture(params=test_context)
def example_fixture(request):
    return request.param

def test_reconciliation(example_fixture) -> None:
    print(example_fixture)
    assert False
pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{HOME}/tests/integration, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:14: in <module>
???
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1327: in fixture
fixture_marker = FixtureFunctionMarker(
<attrs generated init _pytest.fixtures.FixtureFunctionMarker>:5: in __init__
_inst_dict['params'] = __attr_converter_params(params)
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1159: in _params_converter
return tuple(params) if params is not None else None
E TypeError: 'function' object is not iterable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not iterable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.23s ======================================================================================================================================================================
I tried your code split across a test file and conftest.py:
conftest.py
import pytest
from secrets import randbelow
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    # If you add a breakpoint() here it'll never be hit.
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # This line throws an exception since seed was never added.
    return pytestconfig.getoption("seed")
myso_test.py
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # each element here is actually a dictionary but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert True
Test Run:
PS C:\Users\AB45365\PycharmProjects\SO> pytest .\myso_test.py -s -v --seed=10
============================================================== test session starts ==============================================================
platform win32 -- Python 3.9.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- c:\users\ab45365\appdata\local\programs\python\python39\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\AB45365\PycharmProjects\SO
plugins: cases-3.6.11, lazy-fixture-0.6.3
collected 1 item
myso_test.py::test_example[group_item-test_context] PASSED
To complement Devang Sanghani's answer: as of pytest 7.1, pytest_addoption is a pytest plugin hook. So, as with all other plugin hooks, it can only be present in plugin files or in conftest.py.
See the note in https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.hookspec.pytest_addoption:
This function should be implemented only in plugins or conftest.py
files situated at the tests root directory due to how pytest discovers
plugins during startup.
This issue is therefore not related to pytest-cases.
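As a minimal sketch of the layout this implies (reusing the --seed option from the question; the test file name is hypothetical), the option registration lives in conftest.py at the tests root, and any test or fixture can then read it through pytestconfig:

# conftest.py (at the tests root directory)
def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=0)

# test_seed.py (hypothetical file)
def test_seed_is_available(pytestconfig):
    # pytest exposes options registered in conftest.py via the pytestconfig fixture
    assert pytestconfig.getoption("seed") is not None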
After doing some more digging I ran into this documentation around pytest-cases
from secrets import randbelow
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # return pytestconfig.getoption("seed")
    return 1

@pytest.fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # each element here is actually a dictionary but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
This unfortunately ran me into a new problem: it looks like pytest-cases doesn't currently call pytest_addoption during the fixture execution step. I created this ticket to cover that case, but this does effectively solve my original question, even if it has a caveat.
Is the following nested structure discoverable by unittest?
class HerclTests(unittest.TestCase):
    def testJobs(self):
        def testJobSubmit():
            jid = "foobar"
            assert jid, 'hercl job submit failed no job_id'
            return jid

        def testJobShow(jid):
            jid = "foobar"
            out, errout = bash(f"hercl job show --jid {jid} --form json")
            assert 'Job run has been accepted by airflow successfully' in out, 'hercl job show failed'
Here is the error when trying to run unittest:
============================= test session starts ==============================
platform darwin -- Python 3.6.7, pytest-5.4.3, py-1.10.0, pluggy-0.13.1 -- /Users/steve/git/hercl/.venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/steve/git/hercl/tests
collecting ... collected 0 items
ERROR: not found: /Users/steve/git/hercl/tests/hercl_flow_test.py::HerclTests::testJobs::testJobSubmit
(no name '/Users/steve/git/hercl/tests/hercl_flow_test.py::HerclTests::testJobs::testJobSubmit' in any of [<TestCaseFunction testJobs>])
============================ no tests ran in 0.01s =============================
Can this structure be tweaked to work with unittest or must each test method be elevated to the level of the HerclTests class?
This can't work - functions defined inside another function ("inner functions") only "exist" as variables inside the outer function's local scope. They're not accessible to any other code. The unittest discovery won't find them, and couldn't call them even if it knew about them.
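For illustration, a minimal sketch of the flattened structure this implies, with each inner function promoted to a test method of its own (the bash helper is assumed to come from the question's test module):

import unittest

class HerclTests(unittest.TestCase):
    def testJobSubmit(self):
        jid = "foobar"
        self.assertTrue(jid, 'hercl job submit failed no job_id')

    def testJobShow(self):
        jid = "foobar"
        # bash() is the question's own helper for running shell commands
        out, errout = bash(f"hercl job show --jid {jid} --form json")
        self.assertIn('Job run has been accepted by airflow successfully', out,
                      'hercl job show failed')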
I'm parameterizing pytest tests with variables defined in an external YAML file using the pytest_generate_tests hook. The name of the variable file is specified on the pytest command line (--params_file). Only some of the test functions within a module are parameterized and require the variables in this file. Thus, the command line option defining the variables is an optional argument. If the optional argument is omitted from the command line, then I want pytest to just "skip" those test functions which need the external parameterized variables and just run the "other" tests which are not parameterized. The problem is, if the command line option is omitted, pytest is skipping ALL of the test functions, not just the test functions that require the parameters.
Here is the test module file:
def test_network_validate_1(logger, device_connections,):
    ### Test code omitted.....

def test_lsp_throttle_timers(params_file, logger, device_connections):
    ### Test code omitted.....

def test_network_validate_2(logger, device_connections,):
    ### Test code omitted.....
pytest_generate_tests hook in conftest.py:
# Note, I tried scope at function level as well but that did not help
@pytest.fixture(scope='session')
def params_file(request):
    pass

def pytest_generate_tests(metafunc):
    ### Get Pytest rootdir
    rootdir = metafunc.config.rootdir

    print(f"*********** Test Function: {metafunc.function.__name__}")

    if "params_file" in metafunc.fixturenames:
        print("*********** Hello Silver ****************")
        if metafunc.config.getoption("--params_file"):
            #################################################################
            # Params file now located as a separate command line argument for
            # greater flexibility
            #################################################################
            params_file = metafunc.config.getoption("--params_file")
            params_doc = dnet_generic.open_yaml_file(Path(rootdir, params_file),
                                                     loader_type=yaml.Loader)
            test_module = metafunc.module.__name__
            test_function = metafunc.function.__name__
            names, values = dnet_generic.get_test_parameters(test_module,
                                                             test_function,
                                                             params_doc,)
            metafunc.parametrize(names, values)
        else:
            pytest.skip("This test requires the params_file argument")
When the params_file option is present, everything works fine:
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --params_file common/topoA_params.yml --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items *********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Test Function: test_network_validate_2
collected 3 items
<Package '/home/as2863/pythonProjects/p1-automation/isis'>
<Module 'test_isis_lsp_throttle.py'>
<Function 'test_network_validate_1'>
<Function 'test_lsp_throttle_timers'>
<Function 'test_network_validate_2'>
================================================================================ no tests ran in 0.02 seconds =================================================================================
When the params_file option is omitted, you can see that no tests are run, and the print statements show that pytest does not even get to pytest_generate_tests for "test_network_validate_2":
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items
*********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Hello Silver ****************
collected 0 items / 1 skipped
================================================================================== 1 skipped in 0.11 seconds ==================================================================================
As was found in the discussion in the comments, you cannot use pytest.skip in pytest_generate_tests, because there it acts at module scope. To skip the concrete test, you can do something like this:
@pytest.fixture
def skip_test():
    pytest.skip('Some reason')

def pytest_generate_tests(metafunc):
    if "params_file" in metafunc.fixturenames:
        if metafunc.config.getoption("--params_file"):
            ...
            metafunc.parametrize(names, values)
        else:
            metafunc.fixturenames.insert(0, 'skip_test')
That is, you introduce a fixture that skips the concrete test and add this fixture to the test. Make sure to insert it as the first fixture, so that no other fixtures are executed.
While MrBean Bremen's answer may work, according to the pytest authors dynamically altering the fixture list is not something they really want to support. This approach, however, is a bit more supported.
# This is "auto used", but doesn't always skip the test unless the test parameters require it
#pytest.fixture(autouse=True)
def skip_test(request):
# For some reason this is only conditionally set if a param is passed
# https://github.com/pytest-dev/pytest/blob/791b51d0faea365aa9474bb83f9cd964fe265c21/src/_pytest/fixtures.py#L762
if not hasattr(request, 'param'):
return
pytest.skip(f"Test skipped: {request.param}")
And in your test module:
def _add_flag_parameter(metafunc: pytest.Metafunc, name: str):
    if name not in metafunc.fixturenames:
        return

    flag_value = metafunc.config.getoption(name)
    if flag_value:
        metafunc.parametrize(name, [flag_value])
    else:
        metafunc.parametrize("skip_test", [f"Missing flag '{name}'"], indirect=True)

def pytest_generate_tests(metafunc: pytest.Metafunc):
    _add_flag_parameter(metafunc, "params_file")
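Both answers assume that --params_file has already been registered as a command line option; for completeness, a minimal sketch of that registration in conftest.py (the default of None is an assumption):

def pytest_addoption(parser):
    # Register the optional --params_file flag used by pytest_generate_tests above;
    # a default of None means "not supplied", so the skip branch is taken.
    parser.addoption("--params_file", action="store", default=None,
                     help="YAML file with test parameters")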
I would like to add an Ansible module locally. The module should use an external Python library. I have added just the following line:
from ansible.module_utils.foo import Bar
to the Ansible new module template, making it look like this:
my_test.py
#!/usr/bin/python
# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {
    'metadata_version': '1.1',
    'status': ['preview'],
    'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: my_test

short_description: This is my test module

version_added: "2.4"

description:
    - "This is my longer description explaining my test module"

options:
    name:
        description:
            - This is the message to send to the test module
        required: true
    new:
        description:
            - Control to demo if the result of this module is changed or not
        required: false

extends_documentation_fragment:
    - azure

author:
    - Your Name (@yourhandle)
'''
EXAMPLES = '''
# Pass in a message
- name: Test with a message
  my_test:
    name: hello world

# pass in a message and have changed true
- name: Test with a message and changed output
  my_test:
    name: hello world
    new: true

# fail the module
- name: Test failure of the module
  my_test:
    name: fail me
'''
RETURN = '''
original_message:
    description: The original name param that was passed in
    type: str
    returned: always
message:
    description: The output message that the test module generates
    type: str
    returned: always
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.foo import Bar
def run_module():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True),
        new=dict(type='bool', required=False, default=False)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # change is if this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_message='',
        message=''
    )

    # the AnsibleModule object will be our abstraction working with Ansible
    # this includes instantiation, a couple of common attr would be the
    # args/params passed to the execution, as well as if the module
    # supports check mode
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    # if the user is working with this module in only check mode we do not
    # want to make any changes to the environment, just return the current
    # state with no modifications
    if module.check_mode:
        module.exit_json(**result)

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_message'] = module.params['name']
    result['message'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    if module.params['new']:
        result['changed'] = True

    # during the execution of the module, if there is an exception or a
    # conditional state that effectively causes a failure, run
    # AnsibleModule.fail_json() to pass in the message and the result
    if module.params['name'] == 'fail me':
        module.fail_json(msg='You requested this to fail', **result)

    # in the event of a successful module execution, you will want to
    # simple AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)

def main():
    run_module()

if __name__ == '__main__':
    main()
I haven't introduced any changes to the Ansible playbook template. I paste the playbook here for completeness:
testmod.yml
- name: test my new module
  hosts: localhost
  tasks:
    - name: run the new module
      my_test:
        name: 'hello'
        new: true
      register: testout

    - name: dump test output
      debug:
        msg: '{{ testout }}'
I have put the module my_test.py in the following location:
~/.ansible/plugins/modules
I have extended the ANSIBLE_MODULE_UTILS environment variable to make the foo library visible. Generally, leaving Ansible aside, parts of the foo library can be imported into a Python script in the following way:
from foo import Bar
I have tested that when, e.g., foo is a single Python script and Bar is a class inside that script, the testmod.yml playbook runs correctly. The problem is that foo is a directory: there is no foo.py file, nor Bar.py. In my case, when I run testmod.yml, I receive the traceback:
ImportError: No module named foo.config
Could you tell me what I should do to be able to use the external foo library in my local Ansible module?
I have a framework that works under py.test. py.test can generate nice reports with the --html and --junitxml options, but clients using my framework don't always pass these options on the command line when they run py.test. I want py.test to always generate reports when it is used with my framework, and I want to put these reports into the log folder, so I need to generate the report path at runtime. Can I do this with fixtures? Or maybe through the plugin API?
Putting this in conftest.py will suffice:
def pytest_configure(config):
    if config.option.xmlpath is None:
        config.option.xmlpath = get_custom_xml_path()  # implement this
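For example, a minimal sketch of what get_custom_xml_path could look like if the report should land in a log folder with a per-run name; the directory layout and naming scheme are assumptions:

import os
import time

def get_custom_xml_path():
    # Assumption: reports go into a "logs" folder under the current working
    # directory, named with a timestamp so repeated runs don't overwrite each other.
    log_dir = os.path.join(os.getcwd(), "logs")
    os.makedirs(log_dir, exist_ok=True)
    return os.path.join(log_dir, "report_{}.xml".format(int(time.time())))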
The accepted answer is probably a bit more complicated than necessary for most people, for a few reasons:
The decorator doesn't help; it doesn't matter when this executes.
There is no need to make a custom LogXML, since you can just set the property here and it will be used.
slaveinput is specific to the pytest plugin xdist. There is no need to check for it, especially if you don't use xdist.
First of all, if you want to implicitly add the command line args to pytest, you can use the pytest.ini placed in the tests root dir with the addopts config value:
[pytest]
addopts=--verbose --junit-xml=/tmp/myreport.xml # etc
Of course, if you want to dynamically calculate the directory to store the reports, then you can't put it in the config and will need to extend pytest. The best spot would be the pytest_configure hook. Example:
# conftest.py
import tempfile
import pytest
from _pytest.junitxml import LogXML

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    if config.option.xmlpath:  # was passed via config or command line
        return  # let pytest handle it
    if not hasattr(config, 'slaveinput'):
        with tempfile.NamedTemporaryFile(suffix='.xml') as tmpfile:
            xmlpath = tmpfile.name
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
If you remove the first if block, then pytest will completely ignore the --junit-xml arg passed via the command line or in the addopts value in the config.
Example run:
$ pytest
=================================== test session starts ====================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48320357, inifile:
plugins: forked-0.2, asyncio-0.8.0, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 1 item
test_spam.py . [100%]
--- generated xml file: /var/folders/_y/2qk6029j4c7bwv0ddk3p96r00000gn/T/tmp1tleknm3.xml ---
================================ 1 passed in 0.01 seconds ==================================
The xml report is now put in a tempfile.
Configure pytest.ini file with parameters:
# content of pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
;addopts = -vv -rw --html=./results/report.html --self-contained-html
@hoefling's answer worked perfectly for me in conftest.py. The code looks simpler there:
import time

from _pytest.junitxml import LogXML

def pytest_configure(config):
    if not config.option.xmlpath and not hasattr(config, 'slaveinput'):
        xmlpath = "test_report_" + str(int(time.time())) + ".xml"
        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini('junit_suite_name'))
        config.pluginmanager.register(config._xml)
Just to make things clearer: pytest uses argparse, and request.config.option is an argparse.Namespace object. So, if you would like to simulate a command line option such as pytest ... --docker-compose-remove-volumes, you can directly set the attribute docker_compose_remove_volumes on request.config.option (because --docker-compose-remove-volumes is converted to docker_compose_remove_volumes by the argparse module).
This example inverts the default for --docker-compose-remove-volumes, which is false, but lets you restore the original behavior by passing the --keep-containers option to pytest.
import pytest

def pytest_addoption(parser):
    parser.addoption("--keep-containers", action="store_true", default=False,
                     help="Keeps docker-compose on failure.")

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    is_to_keep_container = request.config.getoption("--keep-containers")
    if not is_to_keep_container:
        request.config.option.docker_compose_remove_volumes = True