Set up and tear down functions for pytest-bdd - python

I'm trying to do setup and teardown at the module level using pytest-bdd.
I know that with behave you can create an environment.py file with before_all and after_all functions. How do I do this in pytest-bdd?
I have looked into the "classic xunit-style setup" approach and it didn't work when I tried it. (I know that's more related to pytest than to pytest-bdd.)

You could just declare a pytest.fixture with autouse=True and whatever scope you want. You can then use the request fixture to specify the teardown, e.g.:
@pytest.fixture(autouse=True, scope='module')
def setup(request):
    # Setup code
    def fin():
        # Teardown code
        pass
    request.addfinalizer(fin)
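For behaviour closer to behave's before_all/after_all, the same idea with scope='session' and a yield, placed in conftest.py, is a minimal sketch (the fixture name here is arbitrary):

import pytest

@pytest.fixture(autouse=True, scope='session')
def session_setup_teardown():
    # before_all-style setup: runs once before the whole test session
    yield
    # after_all-style teardown: runs once after the whole test session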

A simple-ish approach for me is to use a trivial fixture.
# This declaration can go in the project's conftest.py:
@pytest.fixture
def context():
    class Context(object):
        pass
    return Context()

@given('some given step')
def some_given_step(context):
    context.uut = ...

@when('some when step')
def some_when_step(context):
    context.result = context.uut...
Note: conftest.py lets you share fixtures between test modules, and putting everything in one file gives me a static-analysis warning anyway.
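For completeness, a rough sketch of how such steps can be bound to a scenario in pytest-bdd; the feature file path, scenario name, and then-step text below are placeholders, not part of the original answer:

from pytest_bdd import scenario, then

@scenario('features/example.feature', 'Some scenario')  # placeholder feature file / scenario name
def test_example():
    pass

@then('some then step')
def some_then_step(context):
    assert context.result is not None  # the shared context fixture carries state between steps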

"pytest supports execution of fixture specific finalization code when the fixture goes out of scope. By using a yield statement instead of return, all the code after the yield statement serves as the teardown code:"
See: https://docs.pytest.org/en/latest/fixture.html
e.g.:
import smtplib

@pytest.fixture(scope="module")
def smtp_connection():
    smtp_connection = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
    yield smtp_connection  # provide the fixture value
    print("teardown smtp")
    smtp_connection.close()
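A test consuming this fixture could then look like the following sketch (smtplib's noop() returns a (code, message) pair, with 250 indicating success):

def test_noop(smtp_connection):
    response, msg = smtp_connection.noop()
    assert response == 250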

Related

How to do a teardown of pytest intermediate results on test fail?

There is plenty of information on how to set up and share pytest fixtures between tests.
But what if a test creates some remote resources and then fails? How do I make pytest clean up resources that didn't exist as fixtures at the start of the test?
You can keep a reference at class level:
@pytest.mark.usefixtures("run_for_test")
class TestExample:
    __some_resource = None

    @pytest.fixture
    def run_for_test(self):
        set_up()
        yield
        if self.__some_resource:
            self.__some_resource.cleanup()

    def test_example(self):
        self.__some_resource = SomeResource()
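An alternative sketch, not from the original answer: let a fixture own a registry that the test appends to, so teardown cleans up exactly what was created even if the test fails midway (SomeResource is the same placeholder as above):

import pytest

@pytest.fixture
def created_resources():
    resources = []
    yield resources               # the test appends whatever it actually creates
    for resource in resources:
        resource.cleanup()        # runs even when the test fails

def test_example(created_resources):
    resource = SomeResource()     # placeholder resource class from the question
    created_resources.append(resource)
    # if an assertion fails after this point, cleanup() above still runs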

Using a command-line option in a pytest skip-if condition

Long story short, I want to be able to skip some tests if the session is being run against our production API. The environment that the tests are run against is set with a command-line option.
I came across the idea of using the pytest_namespace to track global variables, so I set that up in my conftest.py file.
def pytest_namespace():
    return {'global_env': ''}
I take in the command line option and set various API urls (from a config.ini file) in a fixture in conftest.py.
@pytest.fixture(scope='session', autouse=True)
def configInfo(pytestconfig):
    global data
    environment = pytestconfig.getoption('--ENV')
    print(environment)
    environment = str.lower(environment)
    pytest.global_env = environment
    config = configparser.ConfigParser()
    config.read('config.ini')  # local config file
    configData = config['QA-CONFIG']
    if environment == 'qa':
        configData = config['QA-CONFIG']
    if environment == 'prod':
        configData = config['PROD-CONFIG']
    (...)
Then I've got the test I want to skip, and it's decorated like so:
@pytest.mark.skipif(pytest.global_env in 'prod',
                    reason="feature not in Prod yet")
However, whenever I run the tests against prod, they don't get skipped. I did some fiddling around, and found that:
a) the global_env variable is accessible through another fixture
#pytest.fixture(scope="session", autouse=True)
def mod_header(request):
log.info('\n-----\n| '+pytest.global_env+' |\n-----\n')
displays correctly in my logs
b) the global_env variable is accessible in a test, correctly logging the env.
c) pytest_namespace is deprecated
So, I'm assuming this has to do with when the skipif accesses that global_env vs. when the fixtures do in the test session. I also find it non-ideal to use a deprecated functionality.
My question is:
how do I get a value from the pytest command line option into a skipif?
Is there a better way to be trying this than the pytest_namespace?
It looks like the proper way to control skipping of tests according to a command-line option is to mark the tests as skipped dynamically:
Add an option using the pytest_addoption hook like this:
def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )
Use the pytest_collection_modifyitems hook to add a marker like this:
def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
Add the mark to your test:
@pytest.mark.slow
def test_func_slow():
    pass
If you want to use data from the CLI in a test (for example, credentials), it's enough to specify the skip option when retrieving them from pytestconfig.
Add the option using the pytest_addoption hook like this:
def pytest_addoption(parser):
    parser.addoption(
        "--credentials",
        action="store",
        default=None,
        help="credentials to ..."
    )
Use skip=True when getting it from pytestconfig:
#pytest.fixture(scope="session")
def super_secret_fixture(pytestconfig):
credentials = pytestconfig.getoption('--credentials', skip=True)
...
Use the fixture as usual in your test:
def test_with_fixture(super_secret_fixture):
    ...
In this case you will get something like this if you do not pass the --credentials option on the CLI:
Skipped: no 'credentials' option found
It is better to use _pytest.config.get_config() instead of the deprecated pytest.config if you still want to use pytest.mark.skipif like this:
@pytest.mark.skipif(not _pytest.config.get_config().getoption('--credentials'), reason="--credentials was not specified")
The problem with putting global code in fixtures is that markers are evaluated before fixtures, so when skipif is evaluated, configInfo hasn't run yet and pytest.global_env will be empty. I'd suggest moving the configuration code from the fixture to the pytest_configure hook:
# conftest.py
import configparser
import pytest

def pytest_addoption(parser):
    parser.addoption('--ENV')

def pytest_configure(config):
    environment = config.getoption('--ENV')
    pytest.global_env = environment
    ...
The configuration hook is guaranteed to execute before the tests are collected and the markers are evaluated.
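With that in place, the marker from the question can read the value directly, because pytest_configure has already run by the time test modules are collected. A sketch (using == rather than the original substring check; the test name is a placeholder):

@pytest.mark.skipif(pytest.global_env == 'prod',
                    reason="feature not in Prod yet")
def test_not_in_prod_yet():
    ...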
Is there a better way to be trying this than the pytest_namespace?
Some ways I know of:
1. Simply assign an attribute on the pytest module in pytest_configure (pytest.foo = 'bar', like I did in the example above).
2. Use the config object as it is shared throughout the test session:
def pytest_configure(config):
    config.foo = 'bar'

@pytest.fixture
def somefixture(pytestconfig):
    assert pytestconfig.foo == 'bar'

def test_foo(pytestconfig):
    assert pytestconfig.foo == 'bar'
Outside of the fixtures/tests, you can access the config via pytest.config, for example:
@pytest.mark.skipif(pytest.config.foo == 'bar', reason='foo is bar')
def test_baz():
    ...
3. Use caching; this has an additional feature of persisting data between test runs:
def pytest_configure(config):
    config.cache.set('foo', 'bar')

@pytest.fixture
def somefixture(pytestconfig):
    assert pytestconfig.cache.get('foo', None)

def test_foo(pytestconfig):
    assert pytestconfig.cache.get('foo', None)

@pytest.mark.skipif(pytest.config.cache.get('foo', None) == 'bar', reason='foo is bar')
def test_baz():
    assert True
When using 1. or 2., make sure you don't unintentionally overwrite pytest stuff with your own data; prefixing your own variables with a unique name is a good idea. When using caching, you don't have this problem.

Pytest fixture inside class to execute only once

I need to create a class that uses a fixture from conftest.py, and this class has a fixture that I need to use only once per test session.
I have two test classes that depend on this class that has a fixture.
Sample code in test_app.py:
#pytest.mark.usefixtures("driver_get")
class TestBase:
#pytest.fixture(scope="module", autouse=True)
def set_up(self):
# set up code.
# web page will load if this fixture is called
# uses self.driver, where driver was set in driver_get
class TestOne(TestBase):
def test_1(self):
# test code
# uses self.driver also in the test
class TestTwo(TestBase):
def test_2(self):
# test code
# uses self.driver also in the test
Sample code in conftest.py (follows https://dzone.com/articles/improve-your-selenium-webdriver-tests-with-pytest):
#pytest.fixture(scope="session")
def driver_get(request):
from selenium import webdriver
web_driver = webdriver.Chrome()
session = request.node
for item in session.items:
cls = item.getparent(pytest.Class)
setattr(cls.obj,"driver",web_driver)
yield
web_driver.close()
As you can see, conftest.py sets the driver as a class attribute, which is why I am applying the driver_get fixture to class TestBase, since I need to use the driver inside the class.
The problem is that once TestOne finishes, the web page loads again before TestTwo executes, which means the fixture set_up was executed again. That is not what I want (since I set set_up's scope to module, it should really only happen once).
I know there is a similar question asked here (py.test method to be executed only once per run), but the asker didn't have the constraint of needing TestBase to have a fixture applied to it as well.
I have thought of putting the fixture inside conftest.py, but I am not sure if it is possible given my constraint of the fixture needing to be inside a class, and executed only once.
Any help will be appreciated. Thank you!
In your code, the module-scoped set_up fixture is called before TestBase resolves the driver_get fixture. Because of this, accessing self.driver inside set_up will raise an AttributeError: object has no attribute 'driver'.
One quick way to fix this in your example code is to refer to the driver_get fixture in your module set_up fixture like so:
class TestBase:
    @pytest.fixture(scope="module", autouse=True)
    def set_up(self, driver_get):  # <-------------
        self.driver.get("https://www.yahoo.com")
Another way to refer to fixtures is just by including the fixture name as an argument.
Personally I am not a huge fan of the approach you copied from that blog of setting a class attribute on the request node. You'd get an IDE warning about referring to self.driver. To me, it would be clearer to yield the driver out of driver_get and then within the test classes either set it to self in a setup style fixture or use it directly. Similar to something below.
#pytest.fixture(scope="session")
def driver_get():
from selenium import webdriver
web_driver = webdriver.Chrome()
yield web_driver
web_driver.close()
class TestClass:
#pytest.fixture(autouse=True)
def setup(self, driver_get):
self.driver = driver_get
def test_something(self):
self.driver.get("https://www.google.com")
But depending on what else needs to be set up, if you want to control it happening just once per session or module, you would need to modify the approach a bit, as sketched below.
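One hedged sketch of that: keep the expensive work (driver creation, a one-time page load) in session-scoped fixtures in conftest.py and let the per-class autouse fixture only bind the attribute; loaded_page and the URL are placeholder names, not from the original answer:

# conftest.py
import pytest
from selenium import webdriver

@pytest.fixture(scope="session")
def driver_get():
    web_driver = webdriver.Chrome()
    yield web_driver
    web_driver.close()

@pytest.fixture(scope="session")
def loaded_page(driver_get):
    driver_get.get("https://www.example.com")  # one-time page load for the whole session
    return driver_get

# test_app.py
class TestBase:
    @pytest.fixture(autouse=True)
    def set_up(self, loaded_page):
        self.driver = loaded_page  # cheap per-test binding; the page is loaded only once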

How to dynamically add new fixtures to a test based on the fixture signature of a test

So what I would like to achieve is mocking functions in various modules automatically with pytest. So I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture normal factory magic END
This works and all, but I would like to omit the NORMAL_MOCKS and BUILTIN_MOCKS lists. Basically, in a pytest hook I should be able to see that, say, there is a mock_foo fixture that isn't registered yet, so I create a mock for it with the factory and register it. I just couldn't figure out how to do this. I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So I would like to know which hook/call lets me register new fixture functions programmatically.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest

def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])

# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can use a less obvious trick:
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, at least as far as I've just tried.
Note that you do not register the fixture in that case. It is too late for the fixture registration, as all the fixtures are gathered and prepared much earlier at the collection/parametrization stages. In this stage, you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
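Based on the remark above, the same pre-filling trick moved into pytest_runtest_setup would look roughly like this (a sketch under the same assumptions as the hookwrapper version):

# conftest.py
def mock_factory(name):
    return name

def pytest_runtest_setup(item):
    # pre-fill the requested mock_* values so fixture resolution does not try to look them up
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])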
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for, I just looked at the source code and came up with this and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3)
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest

# autouse is relevant, as then the fixture registration happens in time. It's too late if requiring
# the fixture without autouse, e.g. via @pytest.mark.usefixtures("add_fixture_dynamically")
@pytest.fixture(autouse=True)
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")

    # don't register fixture if marker is not present:
    if marker is None:
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally

# suppress warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.<fixture_name>'"""
    config.addinivalue_line("markers", "my_mark")

@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"

pytest exception: 'NoneType' object is not callable

In test1.py I have the code below:
#pytest.fixture(scope="session")
def moduleSetup(request):
module_setup = Module_Setup()
request.addfinalizer(module_setup.teardown())
return module_setup
def test_1(moduleSetup):
print moduleSetup
print '...'
#assert 0
# def test_2(moduleSetup):
# print moduleSetup
# print '...'
# #assert 0
And in conftest.py I have
from selenium import webdriver

class Module_Setup:
    def __init__(self):
        self.driver = webdriver.Firefox()

    def teardown(self):
        self.driver.close()
When I run it, it launches and closes the browser.
But I also get the error: self = <CallInfo when='teardown' exception: 'NoneType' object is not callable>, func = <function <lambda> at 0x104580488>, when = 'teardown'
Also, if I want to run both tests test_1 and test_2 with the same driver object, do I need to use scope module or session?
Regarding the exception
When using request.addfinalizer(), you should pass in a reference to a function.
Your code is passing the result of calling that function:
request.addfinalizer(module_setup.teardown())
You should call it this way:
request.addfinalizer(module_setup.teardown)
Regarding fixture scope
If your fixture can be reused across the whole test run, use "session" scope. If it can be reused only by the tests in one module, use "module" scope.
Alternative fixture solution
The way you use the fixtures is not really in pytest style; it rather resembles unittest.
From the code you show, it seems the only thing you need is a running Firefox with a driver you can use in your tests, and to close it once you are done.
This can be accomplished by single fixture:
#pytest.fixture(scope="session")
def firefox(request):
driver = webdriver.Firefox()
def fin():
driver.close()
request.addfinalizer(fin)
or even better using @pytest.yield_fixture:
#pytest.yield_fixture(scope="session")
def firefox(request):
driver = webdriver.Firefox()
yield driver
driver.close()
The yield is the place where the fixture stops executing and yields the created value (driver) to the test cases.
After the tests are over (or rather, when the scope of our fixture is over), it
continues running the instructions following the yield and does the cleanup
work.
In all cases, you may then modify your test cases as follows:
def test_1(firefox):
    print firefox
    print '...'
and the moduleSetup fixture becomes completely obsolete.
