I have a lot of tests that basically perform the same actions but with different data, so I wanted to implement them using pytest. I managed to do it in a typical JUnit way like this:
import pytest
import random

d = {1: 'Hi', 2: 'How', 3: 'Are', 4: 'You ?'}

def setup_function(function):
    print("setUp", flush=True)

def teardown_function(function):
    print("tearDown", flush=True)

@pytest.mark.parametrize("test_input", [1, 2, 3, 4])
def test_one(test_input):
    print("Test with data " + str(test_input))
    print(d[test_input])
    assert True
Which gives me the following output
C:\Temp>pytest test_prueba.py -s
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: C:\Temp, inifile:
collected 4 items
test_prueba.py
setUp
Test with data 1
Hi
.tearDown
setUp
Test with data 2
How
.tearDown
setUp
Test with data 3
Are
.tearDown
setUp
Test with data 4
You ?
.tearDown
========================== 4 passed in 0.03 seconds ===========================
The problem now is that I would also like to perform some actions in the setup and teardown that need access to the test_input value.
Is there any elegant solution for this?
Maybe I should use the parametrization or the setup/teardown in a different way to achieve this?
If that is the case, can someone show an example of data-driven testing with a parametrized setup and teardown?
Thanks!
Parametrizing a test is more for specifying raw inputs and expected outputs. If you need access to the parameter in the setup, then it's more a part of a fixture than of the test.
So you might like to try:
import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture(params=["good", "bad", "error"])
def parameterized_fixture(request):
    param = request.param
    yield from thing_that_uses_param(param)

def test_one(parameterized_fixture):
    assert parameterized_fixture.lower() == "success"
Which outputs:
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items
a.py::test_one[good] PASSED [ 33%]
a.py::test_one[bad] FAILED [ 66%]
a.py::test_one[error] FAILED [100%]
================================== FAILURES ===================================
________________________________ test_one[bad] ________________________________
parameterized_fixture = 'FAIL'
def test_one(parameterized_fixture):
> assert parameterized_fixture.lower() == "success"
E AssertionError: assert 'fail' == 'success'
E - fail
E + success
a.py:28: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_one[error] _______________________________
parameterized_fixture = None
def test_one(parameterized_fixture):
> assert parameterized_fixture.lower() == "success"
E AttributeError: 'NoneType' object has no attribute 'lower'
a.py:28: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================
However, that requires that you create a parameterized fixture for each set of parameters you might want to use with a fixture.
You could alternatively mix and match the parametrize mark and a fixture that reads those params, but that requires the test to use specific names for the parameters. It will also need to make sure such names are unique so they don't conflict with any other fixtures trying to do the same thing. For instance:
import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture
def my_fixture(request):
    if "my_fixture_param" not in request.funcargnames:
        raise ValueError("could use a default instead here...")
    param = request.getfuncargvalue("my_fixture_param")
    yield from thing_that_uses_param(param)

@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
    assert my_fixture.lower() == "success"
Which outputs:
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items
a.py::test_two[good] PASSED [ 33%]
a.py::test_two[bad] FAILED [ 66%]
a.py::test_two[error] FAILED [100%]
================================== FAILURES ===================================
________________________________ test_two[bad] ________________________________
my_fixture = 'FAIL', my_fixture_param = 'bad'
@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
> assert my_fixture.lower() == "success"
E AssertionError: assert 'fail' == 'success'
E - fail
E + success
a.py:25: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_two[error] _______________________________
my_fixture = None, my_fixture_param = 'error'
@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
> assert my_fixture.lower() == "success"
E AttributeError: 'NoneType' object has no attribute 'lower'
a.py:25: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================
I think that what you are looking for is yield fixtures.
You can make an autouse fixture that runs something before every test and after it, and you can access all of the test metadata (marks, parameters, etc.).
You can read about it here.
Access to the parameters is via the function argument called request.
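For example, a minimal sketch of such an autouse yield fixture (the names here are only illustrative; request.node.callspec exists only for parametrized tests, hence the getattr guard):
import pytest

@pytest.fixture(autouse=True)
def run_around_each_test(request):
    # setup: runs before every test; the request object exposes the test's metadata
    callspec = getattr(request.node, "callspec", None)
    params = callspec.params if callspec is not None else {}
    print("setUp with params:", params)
    yield
    # teardown: runs after every test
    print("tearDown")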
IMO, set_up and tear_down should not access test_input values. If you want them to, then there is probably some problem in your test logic.
set_up and tear_down must be independent of the values used by the test. However, you may use another fixture to get the task done.
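For example, a minimal sketch using indirect parametrization, so the parametrized value flows through a dedicated fixture that wraps it with setup and teardown (the names are illustrative):
import pytest

@pytest.fixture
def prepared_input(request):
    value = request.param          # filled in by the indirect parametrization below
    print("setUp for", value)      # setup code that can see the parametrized value
    yield value                    # the test receives this value
    print("tearDown for", value)   # teardown code that can see it too

@pytest.mark.parametrize("prepared_input", [1, 2, 3, 4], indirect=True)
def test_one(prepared_input):
    assert prepared_input in (1, 2, 3, 4)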
I have a lot of tests that basically do the same actions but with different data
In addition to Dunes' answer relying solely on pytest, this part of your question makes me think that pytest-cases could be useful to you, too, especially if some test data should be parametrized while other data is not.
See this other post for an example, and also the documentation of course. I'm the author by the way ;)
I am trying to parse the FAILURES section from the terminal output of a Pytest session, line by line, identifying the testname and test filename for each test, which I then want to append together to form a "fully qualified test name" (FQTN), e.g. tests/test_1.py::test_3_fails. I also want to get and save the traceback info (which is what is between the testname and the test filename).
The parsing part is straightforward and I already have working regexes that match the test name and the test filename, and I can extract the traceback info based on that. My issue with the FQTNs is algorithmic: I can't seem to figure out the overall logic to identify a test name and then the test's filename, which occurs on a later line. I need to accommodate not only the tests in the middle of the FAILURES section, but also the first and last tests of the section.
Here's an example. This is the output section for all failures during a test run, along with some of the terminal output that comes right before FAILURES, and right after.
.
.
.
============================================== ERRORS ===============================================
__________________________________ ERROR at setup of test_c_error ___________________________________
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
tests/test_2.py:19: AssertionError
============================================= FAILURES ==============================================
___________________________________________ test_3_fails ____________________________________________
log_testname = None
def test_3_fails(log_testname):
> assert 0
E assert 0
tests/test_1.py:98: AssertionError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
______________________________________ test_8_causes_a_warning ______________________________________
log_testname = None
def test_8_causes_a_warning(log_testname):
> assert api_v1() == 1
E TypeError: api_v1() missing 1 required positional argument: 'log_testname'
tests/test_1.py:127: TypeError
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
---------------------------------------- Captured log setup -----------------------------------------
INFO root:test_1.py:68 Running test tests.test_1...
INFO root:test_1.py:69 Setting test up...
INFO root:test_1.py:70 Executing test...
INFO root:test_1.py:72 Tearing test down...
___________________________ test_16_fail_compare_dicts_for_pytest_icdiff ____________________________
def test_16_fail_compare_dicts_for_pytest_icdiff():
listofStrings = ["Hello", "hi", "there", "at", "this"]
listofInts = [7, 10, 45, 23, 77]
assert len(listofStrings) == len(listofInts)
> assert listofStrings == listofInts
E AssertionError: assert ['Hello', 'hi... 'at', 'this'] == [7, 10, 45, 23, 77]
E At index 0 diff: 'Hello' != 7
E Full diff:
E - [7, 10, 45, 23, 77]
E + ['Hello', 'hi', 'there', 'at', 'this']
tests/test_1.py:210: AssertionError
____________________________________________ test_b_fail ____________________________________________
def test_b_fail():
> assert 0
E assert 0
tests/test_2.py:27: AssertionError
============================================== PASSES ===============================================
___________________________________________ test_4_passes ___________________________________________
--------------------------------------- Captured stdout setup ---------------------------------------
Running test tests.test_1...
Running test tests.test_1...
Setting test up...
Setting test up...
Executing test...
Executing test...
Tearing test down...
Tearing test down...
.
.
.
Is anyone here good with algorithms? Maybe some pseudocode that shows an overall way of getting each test name and its associated test filename?
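For illustration, one possible shape of that loop: treat each underscore-banner line as the start of a new test, and take the first file.py:NN: line that follows it as that test's filename. The regexes and that first-location assumption are only a sketch, not working code from the project:
import re

TEST_HEADER = re.compile(r"^_+ (?P<name>\S+) _+$")
LOCATION = re.compile(r"^(?P<file>\S+\.py):\d+: \S+")

def parse_failures(lines):
    """Yield (fqtn, traceback_lines) for each test in the FAILURES block."""
    name, filename, buffer = None, None, []
    for line in lines:
        header = TEST_HEADER.match(line)
        if header:
            # a new test starts; flush the previous one first
            if name and filename:
                yield f"{filename}::{name}", buffer
            name, filename, buffer = header.group("name"), None, []
            continue
        loc = LOCATION.match(line)
        if loc and filename is None:
            filename = loc.group("file")
        buffer.append(line)
    # flush the last test of the FAILURES section
    if name and filename:
        yield f"{filename}::{name}", buffer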
Here is my proposal to get the rendered summary for a test case report. Use this stub as a rough idea - you might want to iterate through the reports and dump the rendered summaries first, then do the curses magic to display the collected data.
Some tests to play with:
import pytest

def test_1():
    assert False

def test_2():
    raise RuntimeError('call error')

@pytest.fixture
def f():
    raise RuntimeError('setup error')

def test_3(f):
    assert True

@pytest.fixture
def g():
    yield
    raise RuntimeError('teardown error')

def test_4(g):
    assert True
Dummy plugin example that renders the summary for test_3 case. Put the snippet in conftest.py:
def pytest_unconfigure(config):
    # example: get rendered output for test case `test_spam.py::test_3`

    # get the reporter
    reporter = config.pluginmanager.getplugin('terminalreporter')

    # create a buffer to dump reporter output to
    import io
    buf = io.StringIO()
    # fake tty or pytest will not colorize the output
    buf.isatty = lambda: True

    # replace writer in reporter to dump the output in buffer instead of stdout
    from _pytest.config import create_terminal_writer
    # I want to use the reporter again later to dump the rendered output,
    # so I store the original writer here (you probably don't need it)
    original_writer = reporter._tw
    writer = create_terminal_writer(config, file=buf)
    # replace the writer
    reporter._tw = writer

    # find the report for `test_spam.py::test_3` (we already know it will be an error report)
    errors = reporter.stats['error']
    test_3_report = next(
        report for report in errors if report.nodeid == 'test_spam.py::test_3'
    )

    # dump the summary along with the stack trace for the report of `test_spam.py::test_3`
    reporter._outrep_summary(test_3_report)

    # print dumped contents
    # you probably don't need this - this is just for demo purposes
    # restore the original writer to write to stdout again
    reporter._tw = original_writer
    reporter.section('My own section', sep='>')
    reporter.write(buf.getvalue())
    reporter.write_sep('<')
A pytest run now yields an additional section
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My own section >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@pytest.fixture
def f():
> raise RuntimeError('setup error')
E RuntimeError: setup error
test_spam.py:14: RuntimeError
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
with the stack trace rendered the same way pytest renders it in the ERRORS summary section. You can play with outcomes for different test cases if you want: replace the reporter.stats key if necessary ('error' or 'failed', or even 'passed', although the summary should be empty for passed tests) and amend the test case nodeid.
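For example, to render an ordinary test failure instead of the setup error, the lookup could be switched to the 'failed' reports; this is just a sketch using test_spam.py::test_1 from the sample tests above, so adjust the node id to your case:
    # sketch: pick a failed (rather than errored) report and render it the same way
    failed = reporter.stats.get('failed', [])
    test_1_report = next(
        report for report in failed if report.nodeid == 'test_spam.py::test_1'
    )
    reporter._outrep_summary(test_1_report)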
I'm attempting to write a test fixture based on randomly generated data. This randomly generated data needs to be able to accept a seed so that we can generate the same data on two different computers at the same time.
I'm using pytest's parser.addoption (I think it's a fixture) to add this ability.
My core issue is that I'd like to be able to parameterize a randomly generated list that uses a fixture as an argument.
from secrets import randbelow

from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(10))

@fixture(scope="session")
def seed(pytestconfig):
    return pytestconfig.getoption("seed")

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize(group_item=test_context["group_items"])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
Leads to this result.
% pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{Home}/tests, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:12: in <module>
???
E TypeError: 'function' object is not subscriptable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not subscriptable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.18s ============================================================================================================================================================================
I think I might be able to work around this issue by making an empty test that leaks the test_context up to the global scope, but that feels really brittle. I'm looking for another method that still lets me do the following (a rough sketch of one possibility appears after the list):
Use the seed fixture to generate data
Generate one test per element in the generated list
Not depend on the order in which the tests are run.
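(For comparison, here is a rough, untested sketch of one way to do this with plain pytest: generate the data at collection time in conftest.py via pytest_generate_tests, driven by the same --seed option. The data builder below is just a stand-in for the real generator.)
# conftest.py (sketch)
import random

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", type=int, default=0)

def pytest_generate_tests(metafunc):
    if "group_item" in metafunc.fixturenames:
        # same seed on two machines -> same generated parameters
        rng = random.Random(metafunc.config.getoption("seed"))
        data = [rng.randrange(100) for _ in range(3)]  # stand-in for the real data
        metafunc.parametrize("group_item", data)

# test file (sketch): one test is generated per element of the seeded list
def test_example(group_item):
    assert group_item >= 0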
Edit
Here's an example of this not working with straight pytest
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture
def seed():
    return 1

@fixture
def test_context(seed):
    return [seed, 'a', 'test', 'list']

@pytest.fixture(params=test_context)
def example_fixture(request):
    return request.param

def test_reconciliation(example_fixture) -> None:
    print(example_fixture)
    assert False
pytest test.py
========================================================================================================================================================================== test session starts ===========================================================================================================================================================================
platform darwin -- Python 3.8.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/{HOME}/tests/integration, configfile: pytest.ini
plugins: datadir-1.3.1, celery-4.4.7, anyio-3.4.0, cases-3.6.11
collected 0 items / 1 error
================================================================================================================================================================================= ERRORS =================================================================================================================================================================================
________________________________________________________________________________________________________________________________________________________________________ ERROR collecting test.py ________________________________________________________________________________________________________________________________________________________________________
test.py:14: in <module>
???
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1327: in fixture
fixture_marker = FixtureFunctionMarker(
<attrs generated init _pytest.fixtures.FixtureFunctionMarker>:5: in __init__
_inst_dict['params'] = __attr_converter_params(params)
../../../../../.venvs/data_platform/lib/python3.8/site-packages/_pytest/fixtures.py:1159: in _params_converter
return tuple(params) if params is not None else None
E TypeError: 'function' object is not iterable
======================================================================================================================================================================== short test summary info =========================================================================================================================================================================
ERROR test.py - TypeError: 'function' object is not iterable
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================================================================================================================ 1 error in 0.23s ======================================================================================================================================================================
I tried your code with a test file and a conftest.py.
conftest.py
import pytest
from secrets import randbelow

from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    # If you add a breakpoint() here it'll never be hit.
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # This line throws an exception since seed was never added.
    return pytestconfig.getoption("seed")
myso_test.py
import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

@fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert True
Test Run:
PS C:\Users\AB45365\PycharmProjects\SO> pytest .\myso_test.py -s -v --seed=10
============================================================== test session starts ==============================================================
platform win32 -- Python 3.9.2, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- c:\users\ab45365\appdata\local\programs\python\python39\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\AB45365\PycharmProjects\SO
plugins: cases-3.6.11, lazy-fixture-0.6.3
collected 1 item
myso_test.py::test_example[group_item-test_context] PASSED
To complement Devang Sanghani's answer: as of pytest 7.1, pytest_addoption is a pytest plugin hook. So, as with all other plugin hooks, it can only be present in plugin files or in conftest.py.
See the note in https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.hookspec.pytest_addoption :
This function should be implemented only in plugins or conftest.py
files situated at the tests root directory due to how pytest discovers
plugins during startup.
This issue is therefore not related to pytest-cases.
After doing some more digging, I ran into this documentation around pytest-cases:
from secrets import randbelow

import pytest
from pytest_cases import parametrize_with_cases, fixture, parametrize

def pytest_addoption(parser):
    parser.addoption("--seed", action="store", default=randbelow(1))

@fixture(scope="session")
def seed(pytestconfig):
    # return pytestconfig.getoption("seed")
    return 1

@pytest.fixture(scope="session")
def test_context(seed):
    # In my actual tests these are randomly generated from the seed.
    # Each element here is actually a dictionary, but I'm showing strings
    # for simplicity of example.
    return ['a', 'test', 'list']

@parametrize("group_item", [test_context])
def case_group_item(group_item: str):
    return group_item, "expected_result_goes_here"

@parametrize_with_cases("sql_statement, expected_result", cases='.')
def test_example(
        sql_statement: str,
        expected_result: int) -> None:
    assert False
This unfortunately ran me into a new problem: it looks like pytest-cases doesn't currently call pytest_addoption during the fixture execution step right now. I created this ticket to cover that case, but this does effectively solve my original question, even if it has a caveat.
I'm parameterizing pytest tests with variables defined in an external YAML file using the pytest_generate_tests hook. The name of the variable file is specified on the pytest command line (--params_file). Only some of the test functions within a module are parameterized and require the variables in this file. Thus, the command line option defining the variables is an optional argument. If the optional argument is omitted from the command line, then I want pytest to just "skip" those test functions which need the external parameterized variables and just run the "other" tests which are not parameterized. The problem is, if the command line option is omitted, pytest is skipping ALL of the test functions, not just the test functions that require the parameters.
Here is the test module file:
def test_network_validate_1(logger, device_connections):
    ### Test code omitted.....

def test_lsp_throttle_timers(params_file, logger, device_connections):
    ### Test code omitted.....

def test_network_validate_2(logger, device_connections):
    ### Test code omitted.....
pytest_generate_tests hook in conftest.py:
# Note, I tried scope at function level as well but that did not help
@pytest.fixture(scope='session')
def params_file(request):
    pass

def pytest_generate_tests(metafunc):
    ### Get Pytest rootdir
    rootdir = metafunc.config.rootdir

    print(f"*********** Test Function: {metafunc.function.__name__}")
    if "params_file" in metafunc.fixturenames:
        print("*********** Hello Silver ****************")
        if metafunc.config.getoption("--params_file"):
            #################################################################
            # Params file now located as a separate command line argument for
            # greater flexibility
            #################################################################
            params_file = metafunc.config.getoption("--params_file")
            params_doc = dnet_generic.open_yaml_file(Path(rootdir, params_file),
                                                     loader_type=yaml.Loader)
            test_module = metafunc.module.__name__
            test_function = metafunc.function.__name__
            names, values = dnet_generic.get_test_parameters(test_module,
                                                             test_function,
                                                             params_doc)
            metafunc.parametrize(names, values)
        else:
            pytest.skip("This test requires the params_file argument")
When the params_file option is present, everything works fine:
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --params_file common/topoA_params.yml --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items *********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Test Function: test_network_validate_2
collected 3 items
<Package '/home/as2863/pythonProjects/p1-automation/isis'>
<Module 'test_isis_lsp_throttle.py'>
<Function 'test_network_validate_1'>
<Function 'test_lsp_throttle_timers'>
<Function 'test_network_validate_2'>
================================================================================ no tests ran in 0.02 seconds =================================================================================
When the params_file option is omitted, you can see that no tests are run, and the print statements show it does not even try to run pytest_generate_tests on "test_network_validate_2":
pytest isis/test_isis_lsp_throttle.py --testinfo topoA_r28.yml --ulog -s --collect-only
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.4, pytest-3.7.0, py-1.8.0, pluggy-0.13.0
rootdir: /home/as2863/pythonProjects/p1-automation, inifile: pytest.ini
plugins: csv-2.0.1, check-0.3.5, pylama-7.6.6, dependency-0.4.0, instafail-0.4.0, ordering-0.6, repeat-0.7.0, reportportal-5.0.3
collecting 0 items
*********** Test Function: test_network_validate_1
*********** Test Function: test_lsp_throttle_timers
*********** Hello Silver ****************
collected 0 items / 1 skipped
================================================================================== 1 skipped in 0.11 seconds ==================================================================================
As has been found in the discussion in the comments, you cannot use pytest.skip in pytest_generate_tests, because it works at module scope. To skip the concrete test, you can do something like this:
@pytest.fixture
def skip_test():
    pytest.skip('Some reason')

def pytest_generate_tests(metafunc):
    if "params_file" in metafunc.fixturenames:
        if metafunc.config.getoption("--params_file"):
            ...
            metafunc.parametrize(names, values)
        else:
            metafunc.fixturenames.insert(0, 'skip_test')
E.g. you introduce a fixture that will skip the concrete test, and add this fixture to the test. Make sure to insert it as the first fixture, so no other fixtures will be executed.
While MrBean Bremen's answer may work, according to the pytest authors dynamically altering the fixture list is not something they really want to support. This approach, however, is a bit more supported.
# This is "auto used", but doesn't always skip the test unless the test parameters require it
@pytest.fixture(autouse=True)
def skip_test(request):
    # For some reason this is only conditionally set if a param is passed
    # https://github.com/pytest-dev/pytest/blob/791b51d0faea365aa9474bb83f9cd964fe265c21/src/_pytest/fixtures.py#L762
    if not hasattr(request, 'param'):
        return
    pytest.skip(f"Test skipped: {request.param}")
And in your test module:
def _add_flag_parameter(metafunc: pytest.Metafunc, name: str):
    if name not in metafunc.fixturenames:
        return
    flag_value = metafunc.config.getoption(name)
    if flag_value:
        metafunc.parametrize(name, [flag_value])
    else:
        metafunc.parametrize("skip_test", [f"Missing flag '{name}'"], indirect=True)

def pytest_generate_tests(metafunc: pytest.Metafunc):
    _add_flag_parameter(metafunc, "params_file")
When I call pytest.main(...) in Python, it will display the unit test messages in the output window, such as
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: xxx, inifile: pytest.ini
plugins: cov-2.5.1, hypothesis-3.38.5
collected 1 item
..\test_example.py F [100%]
================================== FAILURES ===================================
_______________________________ test_something ________________________________
def test_something():
> assert 1 == 0
E assert 1 == 0
..\test_example.py:5: AssertionError
========================== 1 failed in 0.60 seconds ===========================
My question is simply: how can I get the message above into a string object? The documentation doesn't say anything about this, and what pytest.main() returns is only an integer representing the exit code.
https://docs.pytest.org/en/latest/usage.html#calling-pytest-from-python-code
I don't know pytest, but you can try to redirect the standard output to a file. Something like this:
import sys
sys.stdout = open("log.txt", "w")
After this you can read the file to get the strings. What do you want to do with the strings?
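If you would rather keep everything in memory instead of a file, something along these lines should also work (a minimal sketch; the test path is just illustrative):
import io
from contextlib import redirect_stdout

import pytest

buf = io.StringIO()
with redirect_stdout(buf):
    # pytest's terminal report is written to stdout, which is redirected into buf
    exit_code = pytest.main(["../test_example.py"])

output = buf.getvalue()  # the whole session output as one string
print("pytest exit code:", exit_code)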
I'm trying to make a simple test with an assert statement, like this:
def sqr(x):
    return x**2 + 1

def test_sqr():
    assert kvadrat(2) == 4
! pytest
and it reports that no tests ran.
If anyone has any ideas about what might be wrong, please let me know.
For pytest to find the tests, they have to be in a file in the current directory or a sub-directory, and the filename has to begin with test (although that's probably customizable).
So if you use (inside a Jupyter notebook):
%%file test_sth.py
def sqr(x):
    return x**2 + 1

def test_sqr():
    assert kvadrat(2) == 4
That creates a file named test_sth.py with the given contents.
Then run ! pytest and it should work (well, fail):
============================= test session starts =============================
platform win32 -- Python 3.6.3, pytest-3.3.0, py-1.5.2, pluggy-0.6.0
rootdir: D:\JupyterNotebooks, inifile:
plugins: hypothesis-3.38.5
collected 1 item
test_sth.py F [100%]
================================== FAILURES ===================================
__________________________________ test_sqr ___________________________________
def test_sqr():
> assert kvadrat(2) == 4
E NameError: name 'kvadrat' is not defined
test_sth.py:5: NameError
========================== 1 failed in 0.12 seconds ===========================