I have a test suite where I need to mark some tests as xfail, but I cannot edit the test functions themselves to add markers. Is it possible to specify that some tests should be xfail from the command line using pytest? Or barring that, at least by adding something to pytest.ini or conftest.py?
I don't know of a command-line option to do this, but if you can filter out the respective tests, you may implement pytest_collection_modifyitems and add an xfail marker to these tests:
conftest.py
names_to_be_xfailed = ("test_1", "test_3")

def pytest_collection_modifyitems(config, items):
    for item in items:
        if item.name in names_to_be_xfailed:
            item.add_marker("xfail")
or, if the name is not unique, you could also filter by item.nodeid.
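For example, a minimal sketch matching on full node IDs instead (the file and test names below are placeholders):

nodeids_to_be_xfailed = (
    "tests/test_mod.py::test_1",
    "tests/test_mod.py::TestClass::test_3",
)

def pytest_collection_modifyitems(config, items):
    for item in items:
        # item.nodeid is the full "path::class::name" identifier, so it stays
        # unique even when several tests share the same function name
        if item.nodeid in nodeids_to_be_xfailed:
            item.add_marker("xfail")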
Related
I am writing unit tests with pytest-django for a Django app. I want to make my tests more performant, and doing so requires keeping data saved in the database for a certain time instead of dropping it after every single test. For example:
@pytest.mark.django_db
def test_save():
    p1 = MyModel.objects.create(description="some description")  # this object has the id 1
    p1.save()

@pytest.mark.django_db
def test_modify():
    p1 = MyModel.objects.get(id=1)
    p1.description = "new description"
What I want to know is how to keep both tests separate while they share the same test database for some time, dropping it afterwards.
I think what you need are pytest fixtures. They allow you to create objects (stored in the database if needed) that are used during tests. Have a look at fixture scope, which you can set so that the fixture is not torn down and recreated for each test that requires it, but is instead created once for a group of tests and deleted afterwards.
You should read the documentation of pytest fixtures (https://docs.pytest.org/en/6.2.x/fixture.html) and the section dedicated to fixtures' scope (https://docs.pytest.org/en/6.2.x/fixture.html#scope-sharing-fixtures-across-classes-modules-packages-or-session).
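As a rough illustration of scope (independent of Django), a module-scoped fixture runs its setup only once per test module:

import pytest

# Minimal sketch: the fixture body runs once per module, and every test in
# that module receives the same object instead of a freshly created one.
@pytest.fixture(scope="module")
def shared_data():
    data = {"description": "some description"}  # stand-in for expensive setup
    yield data
    # teardown code placed here runs after the last test in the module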
I need to:
Avoid using save() in tests
Use @pytest.mark.django_db on all tests inside this class
Create a number of trx fixtures (10 to 20) to act as fake data.
import pytest
from ngg.processing import (
    elab_data
)

class TestProcessing:
    @pytest.mark.django_db
    def test_elab_data(self, plan,
                       obp,
                       customer,
                       bac,
                       col,
                       trx_1,
                       trx_2,
                       ...):
        plan.save()
        obp.save()
        customer.save()
        bac.save()
        col.save()
        trx.save()
        elab_data(bac, col)
Where the fixtures are simply models like this:
@pytest.fixture
def plan():
    plan = Plan(
        name='test_plan',
        status='1'
    )
    return plan
I don't find this approach very clean. How would you do it?
TL;DR
test.py
import pytest
from ngg.processing import elab_data
@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, plan, obp, customer, bac, col, trx_1, trx_2):
        elab_data(bac, col)
conftest.py
import pytest

@pytest.fixture(params=[
    ('test_plan', 1),
    ('test_plan2', 2),
])
def plan(request, db):
    name, status = request.param
    return Plan.objects.create(name=name, status=status)
I'm not quite sure if I understood it correctly.
Avoid using save() in tests
You may create objects using instance = Model.objects.create(), or just call instance.save() inside the fixtures.
As described in the note section here:
To access the database in a fixture, it is recommended that the fixture explicitly request one of the db, transactional_db or django_db_reset_sequences fixtures.
and in the fixture section here:
This fixture will ensure the Django database is set up. Only required for fixtures that want to use the database themselves. A test function should normally use the pytest.mark.django_db() mark to signal it needs the database.
you may want to use the db fixture in your record fixtures and keep the django_db mark on your test cases.
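A minimal sketch of such a record fixture, assuming Plan is importable from your app's models:

import pytest

# Requesting the `db` fixture gives this fixture database access
# without needing the django_db mark on the fixture itself.
@pytest.fixture
def plan(db):
    return Plan.objects.create(name="test_plan", status="1")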
Use @pytest.mark.django_db on all tests inside this class
To mark whole classes you may use a decorator on the class or the pytestmark variable, as described here:
You may use pytest.mark decorators with classes to apply markers to all of its test methods
To apply marks at the module level, use the pytestmark global variable
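Both variants look roughly like the following sketch (shown together only for illustration; either one is enough):

import pytest

# Module-level: applies the mark to every test in this file.
pytestmark = pytest.mark.django_db

# Class-level: applies the mark to every test method of the class.
@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, bac, col):
        ...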
Create a number of trx fixtures (10 to 20) to act as fake data.
I didn't quite get what you were trying to do, but I assume it is one of the following:
Create multiple objects and pass them as fixtures
In that case you may want to create a fixture that returns a generator or a list, and use the whole list instead of multiple fixtures.
Test cases using different variants of a fixture, one or a few at a time
In that case you may want to parametrize your fixture so that it returns different objects and the test case runs multiple times, once per variant.
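A sketch of the two options (Trx and its amount field are hypothetical names standing in for your model):

import pytest

# Option 1: a single fixture returning a list, so the test gets all objects at once.
@pytest.fixture
def trx_list(db):
    return [Trx.objects.create(amount=i) for i in range(10)]

# Option 2: a parametrized fixture; every test requesting `trx` runs once per param.
@pytest.fixture(params=[10, 20, 30])
def trx(request, db):
    return Trx.objects.create(amount=request.param)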
I'm trying to create a test report in JUnit XML format while running pytest on Jenkins.
The default test suite name is "pytest", but I want to change the name depending on parameter values.
For example,
in conftest.py, I have
def pytest_addoption(parser):
    parser.addoption("--site", type=str.upper, action="append", default=[],
                     help="testing site")
And I want to change junit_suite_name option depending on the site value.
I read the pytest documentation and found that you can change the name in the config file like this:
[pytest]
junit_suite_name = my_suite
or on the command line with -o junit_suite_name=my_suite.
But this way, the name will always be the same for all test cases.
Is there a way to group the suite name conditionally?
Thanks
You can change ini options programmatically by setting or changing values in the config.inicfg dict. For example, do it in a custom pytest_configure hookimpl:
def pytest_configure(config):
    # --site was added with action="append" and type=str.upper,
    # so config.option.site is a list of upper-cased values
    if 'FOO' in config.option.site:
        config.inicfg['junit_suite_name'] = 'bar'
The suite name in the JUnit report will now be bar when you run pytest --site=foo.
I have a set of pretest checks that I need to run; however, one of the checks requires a provider to be initialized.
It would appear that the way around this is to add 'foo_provider' to the fixture arguments to make sure it runs after foo_provider is run. However, I don't want to list 20 fixtures in the args of the pretest fixture.
I've tried using pytest.mark.trylast and I've tried order markers; none of these works properly (or at all). I have also tried adding various things to pytest_generate_tests, but that tends to mess up the number of tests.
I finally managed it by adding kwargs to the fixture definition and then modifying metafunc._arg2fixturedefs through a function from pytest_generate_tests which feels really bad.
I tried and failed with this as it ran the check too early as well:
@pytest.fixture(params=[pytest.lazy_fixture('foo_provider')], autouse=True)
def pretest(test_skipper, logger, request):
    logger().info('running pretest checks')
    test_skipper()
I also tried and failed at reordering the fixtures like this (called from pytest_generate_tests):
def execute_pretest_last(metafunc):
    fixturedef = metafunc._arg2fixturedefs['pretest']
    fixture = metafunc._arg2fixturedefs['pretest'][0]
    names_order = metafunc.fixturenames
    names_order.insert(len(names_order), names_order.pop(names_order.index(fixture.argname)))
    funcarg_order = metafunc.funcargnames
    funcarg_order.insert(len(funcarg_order),
                         funcarg_order.pop(funcarg_order.index(fixture.argname)))
The code below works as expected, but is there a better way?
def pytest_generate_tests(metafunc):
    for fixture in metafunc.fixturenames:
        if fixture in PROVIDER_MAP:
            parametrize_api(metafunc, fixture)
            add_fixture_to_pretest_args(metafunc, fixture)

def add_fixture_to_pretest_args(metafunc, fixture):
    pretest_fixtures = list(metafunc._arg2fixturedefs['pretest'][0].argnames)
    pretest_fixtures.append(fixture)
    metafunc._arg2fixturedefs['pretest'][0].argnames = tuple(pretest_fixtures)

@pytest.fixture(autouse=True)
def pretest(test_skipper, logger, **kwargs):
    logger().info('running pretest checks')
    test_skipper()
How can I mark a test as skipped in pytest collection process?
What I'm trying to do is have pytest collect all tests and then using the pytest_collection_modifyitems hook mark a certain test as skipped according to a condition I get from a database.
I found a solution which I don't like; I was wondering if maybe there is a better way.
def pytest_collection_modifyitems(items, config):
    ...  # get skip condition from database
    for item in items:
        if skip_condition == True:
            item._request.applymarker(pytest.mark.skipif(True, reason='Put any reason here'))
The problem with this solution is that I'm accessing a protected member (_request) of the class.
You were almost there. You just need item.add_marker:
def pytest_collection_modifyitems(config, items):
    skip = pytest.mark.skip(reason="Skipping this because ...")
    for item in items:
        if skip_condition:  # NB you don't need the == True
            item.add_marker(skip)
Note that item has an iterable attribute, keywords, which contains its markers, so you can use that too.
See pytest documentation on this topic.
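For example, a sketch that checks keywords for an existing marker before skipping ("slow" is just an example marker name):

import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        # item.keywords contains the names of the markers applied to the item
        if "slow" in item.keywords:
            item.add_marker(pytest.mark.skip(reason="Skipping slow tests"))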
You can also skip test cases from a common fixture. With autouse=True you don't have to pass it to each test case as a parameter:
@pytest.fixture(scope='function', autouse=True)
def my_common_fixture(request):
    if True:  # replace with your actual skip condition
        pytest.skip('Put any reason here')