Mark test as skipped from pytest_collection_modifyitems - python

How can I mark a test as skipped in pytest collection process?
What I'm trying to do is have pytest collect all tests and then, using the pytest_collection_modifyitems hook, mark a certain test as skipped according to a condition I get from a database.
I found a solution which I don't like; I was wondering whether there is a better way.
def pytest_collection_modifyitems(items, config):
    ...  # get skip condition from database
    for item in items:
        if skip_condition == True:
            item._request.applymarker(pytest.mark.skipif(True, reason='Put any reason here'))
The problem with this solution is that I'm accessing a protected member (_request) of the class.

You were almost there. You just need item.add_marker
def pytest_collection_modifyitems(config, items):
    skip = pytest.mark.skip(reason="Skipping this because ...")
    for item in items:
        if skip_condition:  # NB you don't need the == True
            item.add_marker(skip)
Note that item has an iterable attribute keywords which contains its markers. So you can use that too.
See pytest documentation on this topic.
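For example, a minimal sketch of using item.keywords in the same hook (the "slow" marker name here is just an illustration, not from the original answer): it skips every collected test that carries a given marker.

import pytest

def pytest_collection_modifyitems(config, items):
    skip_slow = pytest.mark.skip(reason="slow tests disabled in this run")
    for item in items:
        if "slow" in item.keywords:  # marker names show up in item.keywords
            item.add_marker(skip_slow)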

You can iterate over test cases (items) and skip them using a common fixture. With autouse=True you don't need to pass it to each test case as a parameter:
@pytest.fixture(scope='function', autouse=True)
def my_common_fixture(request):
    if True:  # replace True with your actual skip condition
        pytest.skip('Put any reason here')
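If the condition really comes from a database, as in the question, a session-scoped helper fixture keeps it to a single lookup. A minimal sketch, where get_skip_condition() is a hypothetical stand-in for that query:

import pytest

@pytest.fixture(scope='session')
def skip_condition():
    return get_skip_condition()  # hypothetical database lookup

@pytest.fixture(autouse=True)
def my_common_fixture(skip_condition):
    if skip_condition:
        pytest.skip('Put any reason here')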

Related

How to parametrize prepared data

I need to prepare an entity with suitable child entities. I need to specify the type of entities that will be stored in the prepared parent entity. Like this:
@pytest.fixture(element_types)
def entry_type_id(element_types):
    elements = [resolve_elements_create(data=element_data(element_type)) for element_type in element_types]
    entry_type_id = resolve_entry_type_create(elements)
    return entry_type_id

def test_something(entry_type_id([ElementType1, ElementType2])):
    ...
I can't create one fixture for each use case, because there are so many combinations I need. Is there any way I can pass parameters to the fixture to customize the prepared entity?
I don't fully understand what your end goal is, but according to your comment I think you should create a test class so you can create the elements and then delete them, since you want to test the creation + deletion of the entries.
#pytest.fixture(scope="class")
def entry_type(request)
element = resolve_elements_create(data=element_data(request.param))
# This should return 0 if Error during creation
return resolve_entry_type_create(element)
followed by the test itself:
@pytest.mark.parametrize("entry_type", [ElementType1, ElementType2], indirect=True)
class TestEntries:
    def test_create_entry(self, entry_type):
        assert entry_type
    def test_delete_entry(self, entry_type):
        assert delete_entry(entry_type)
This is more pseudo-code than a finished solution and it will need some changes, but in most cases the use of fixtures should be preferred over plain functions in pytest.
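If you specifically want to hand the fixture a list of element types, as in the question, indirect parametrization can carry the whole list in request.param. A minimal sketch using the question's own helpers (resolve_elements_create, element_data, resolve_entry_type_create):

import pytest

@pytest.fixture
def entry_type_id(request):
    # request.param is the tuple of element types supplied by the test
    elements = [resolve_elements_create(data=element_data(t)) for t in request.param]
    return resolve_entry_type_create(elements)

@pytest.mark.parametrize("entry_type_id", [(ElementType1, ElementType2)], indirect=True)
def test_something(entry_type_id):
    ...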

xfail pytest tests from the command line

I have a test suite where I need to mark some tests as xfail, but I cannot edit the test functions themselves to add markers. Is it possible to specify that some tests should be xfail from the command line using pytest? Or barring that, at least by adding something to pytest.ini or conftest.py?
I don't know of a command line option to do this, but if you can filter out the respective tests, you may implement pytest_collection_modifyitems and add an xfail marker to these tests:
conftest.py
names_to_be_xfailed = ("test_1", "test_3")

def pytest_collection_modifyitems(config, items):
    for item in items:
        if item.name in names_to_be_xfailed:
            item.add_marker("xfail")
or, if the name is not unique, you could also filter by item.nodeid.
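If you do want to drive this from the command line, one option (not part of the original answer; the --xfail-tests flag name is made up) is to register a custom option and read it inside the hook:

# conftest.py
def pytest_addoption(parser):
    parser.addoption("--xfail-tests", action="store", default="",
                     help="comma-separated test names to mark as xfail")

def pytest_collection_modifyitems(config, items):
    names_to_be_xfailed = [n for n in config.getoption("--xfail-tests").split(",") if n]
    for item in items:
        if item.name in names_to_be_xfailed:
            item.add_marker("xfail")

This would be invoked as pytest --xfail-tests=test_1,test_3.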

pytest mark django db: avoid to save fixture in tests

I need to:
Avoid using save() in tests
Use @pytest.mark.django_db on all tests inside this class
Create a number of trx fixtures (10/20) to act as fake data.
import pytest
from ngg.processing import (
    elab_data
)

class TestProcessing:
    @pytest.mark.django_db
    def test_elab_data(self, plan,
                       obp,
                       customer,
                       bac,
                       col,
                       trx_1,
                       trx_2,
                       ...):
        plan.save()
        obp.save()
        customer.save()
        bac.save()
        col.save()
        trx.save()
        elab_data(bac, col)
Where the fixtures are simply models like this:
@pytest.fixture
def plan():
    plan = Plan(
        name='test_plan',
        status='1'
    )
    return plan
I don't find this way really clean. How would you do that?
TL;DR
test.py
import pytest
from ngg.processing import elab_data

@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, plan, obp, customer, bac, col, trx_1, trx_2):
        elab_data(bac, col)
conftest.py
@pytest.fixture(params=[
    ('test_plan', 1),
    ('test_plan2', 2),
])
def plan(request, db):
    name, status = request.param
    return Plan.objects.create(name=name, status=status)
I'm not quite sure if I got it correctly
Avoid using save() in tests
You may create objects using instance = Model.objects.create() or just put instance.save() in fixtures.
As described in the note section here
To access the database in a fixture, it is recommended that the fixture explicitly request one of the db, transactional_db or django_db_reset_sequences fixtures.
and in the fixture section here
This fixture will ensure the Django database is set up. Only required for fixtures that want to use the database themselves. A test function should normally use the pytest.mark.django_db() mark to signal it needs the database.
you may want to use the db fixture in your record fixtures and keep the django_db mark on your test cases.
Use @pytest.mark.django_db on all tests inside this class
To mark whole classes you may use a decorator on the class or the pytestmark variable, as described here.
You may use pytest.mark decorators with classes to apply markers to all of its test methods
To apply marks at the module level, use the pytestmark global variable
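A minimal sketch of the module-level variant:

import pytest

# applies the mark to every test in this module
pytestmark = pytest.mark.django_db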
Create a number of trx fixtures (10/20) to act as fake data.
I didn't quite get what you are trying to do, but I assume it is one of the following:
Create multiple objects and pass them as fixtures
In that case you may want to create a fixture that returns a generator or a list and use the whole list instead of multiple fixtures (see the sketch after this list).
Test cases using different variants of a fixture, but only one or a few at a time
In that case you may want to parametrize your fixture so it returns different objects, and the test case will run multiple times, once per variant.
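A minimal sketch of the first option; Trx and its amount field are placeholders for the actual transaction model:

import pytest

@pytest.fixture
def trx_list(db):
    # one fixture returning a batch of fake transactions instead of trx_1, trx_2, ...
    return [Trx.objects.create(amount=i) for i in range(10)]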

Pre-test tasks using pytest markers?

I've got a Python application using pytest. For several of my tests, there are API calls to Elasticsearch (using elasticsearch-dsl-py) that slow down my tests, which I'd like to:
prevent unless a pytest marker is used.
If a marker is used, I would want that marker to execute some code before the test runs, just like a fixture would if you used yield.
This is mostly inspired by pytest-django, where you have to use the django_db marker in order to make a connection to the database (but they throw an error if you try to connect to the DB, whereas I just don't want a call in the first place, that's all).
For example:
def test_unintentionally_using_es():
    """I don't want a call going to Elasticsearch, but they just happen.
    Is there a way to "mock" the call? Or even just prevent the call from happening?"""

@pytest.mark.elastic
def test_intentionally_using_es():
    """I would like for this marker to perform some tasks beforehand (i.e. clear the indices)."""

# To replicate that second test, I currently use a fixture:
@pytest.fixture
def elastic():
    # Pre-test tasks
    yield something
I think that's a use-case for markers right? Mostly inspired by pytest-django.
Your initial approach with having a combination of a fixture and a custom marker is the correct one; in the code below, I took the code from your question and filled in the gaps.
Suppose we have some dummy function to test that uses the official elasticsearch client:
# lib.py
from datetime import datetime
from elasticsearch import Elasticsearch

def f():
    es = Elasticsearch()
    es.indices.create(index='my-index', ignore=400)
    return es.index(
        index="my-index",
        id=42,
        body={"any": "data", "timestamp": datetime.now()},
    )
We add two tests: one is not marked with elastic and should operate on the fake client, the other one is marked and needs access to the real client:
# test_lib.py
import pytest

from lib import f

def test_fake():
    resp = f()
    assert resp["_id"] == "42"

@pytest.mark.elastic
def test_real():
    resp = f()
    assert resp["_id"] == "42"
Now let's write the elastic() fixture that will mock the Elasticsearch class depending on whether the elastic marker was set:
from unittest.mock import MagicMock, patch
import pytest

@pytest.fixture(autouse=True)
def elastic(request):
    should_mock = request.node.get_closest_marker("elastic") is None
    if should_mock:
        patcher = patch('lib.Elasticsearch')
        fake_es = patcher.start()
        # this is just a mock example
        fake_es.return_value.index.return_value.__getitem__.return_value = "42"
    else:
        ...  # e.g. start the real server here etc.
    yield
    if should_mock:
        patcher.stop()
Notice the usage of autouse=True: the fixture will be executed on each test invocation, but it only does the patching if the test is not marked. The presence of the marker is checked via request.node.get_closest_marker("elastic") is None. If you run both tests now, test_fake will pass because elastic mocks the Elasticsearch.index() response, while test_real will fail, assuming you don't have a server running on port 9200.
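As a side note (not part of the original answer), you may also want to register the custom marker so pytest does not emit an unknown-mark warning. A minimal conftest.py sketch:

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "elastic: test talks to a real Elasticsearch server"
    )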

Is there a better way to ensure that a fixture runs after another without listing those fixtures in the arguments?

I have a set of pretest checks that I need to run, however one of the checks requires a provider to be initialized.
It would appear that the way around this is to add 'foo_provider' to the fixture arguments to make sure it runs after foo_provider is run. However, I don't want to list 20 fixtures in the args of the pretest fixture.
I've tried using pytest.mark.trylast, I've tried order markers. None of these work properly (or at all). I have tried adding various things to pytest_generate_tests and that tends to screw up the number of tests.
I finally managed it by adding kwargs to the fixture definition and then modifying metafunc._arg2fixturedefs through a function from pytest_generate_tests which feels really bad.
I tried and failed with this as it ran the check too early as well:
@pytest.fixture(params=[pytest.lazy_fixture('foo_provider')], autouse=True)
def pretest(test_skipper, logger, request):
    logger().info('running pretest checks')
    test_skipper()
Tried and failed at reordering the fixtures like this (called from pytest_generate_tests):
def execute_pretest_last(metafunc):
    fixturedef = metafunc._arg2fixturedefs['pretest']
    fixture = metafunc._arg2fixturedefs['pretest'][0]
    names_order = metafunc.fixturenames
    names_order.insert(len(names_order), names_order.pop(names_order.index(fixture.argname)))
    funcarg_order = metafunc.funcargnames
    funcarg_order.insert(len(funcarg_order),
                         funcarg_order.pop(funcarg_order.index(fixture.argname)))
The code below works as expected, but is there a better way?
def pytest_generate_tests(metafunc):
    for fixture in metafunc.fixturenames:
        if fixture in PROVIDER_MAP:
            parametrize_api(metafunc, fixture, indirect=True)
            add_fixture_to_pretest_args(metafunc, fixture)

def add_fixture_to_pretest_args(metafunc, fixture):
    pretest_fixtures = list(metafunc._arg2fixturedefs['pretest'][0].argnames)
    pretest_fixtures.append(fixture)
    metafunc._arg2fixturedefs['pretest'][0].argnames = tuple(pretest_fixtures)

@pytest.fixture(autouse=True)
def pretest(test_skipper, logger, **kwargs):
    logger().info('running pretest checks')
    test_skipper()
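One possible alternative, not from the original post: since pytest's request.getfixturevalue() can pull a fixture in dynamically, the pretest fixture could resolve whichever provider fixtures the test actually uses without listing them as arguments. A rough sketch, assuming the keys of PROVIDER_MAP are the provider fixture names:

import pytest

@pytest.fixture(autouse=True)
def pretest(request, test_skipper, logger):
    for name in PROVIDER_MAP:
        if name in request.fixturenames:   # only providers this test actually uses
            request.getfixturevalue(name)  # forces the provider fixture to run first
    logger().info('running pretest checks')
    test_skipper()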
