I am writing tests in pytest and am using fixtures as variables.
Originally, this is how the fixtures looked:
@pytest.fixture(scope="class")
def user(request):
    u = "Matt"
    request.cls.u = u
    return u
And then, there was another fixture to delete the user from the database once I finished with it.
In the tests, I used both fixtures like so: @pytest.mark.usefixtures("user", "teardown_fixture")
The teardown fixture was class scoped, until I decided to change it to session scoped, since I want to delete the user only after running all the tests.
The problem was that suddenly the teardown fixture couldn't access user, since user is class scoped.
I changed user to session scope as well; however, I am not sure how to access or export it now.
@pytest.fixture(scope="session")
def user(request):
    u = "Matt"
    # request.cls.user = u -> WHAT GOES HERE INSTEAD OF THIS?
    return u
user is no longer recognized in the test functions. The test is located inside a class; the current function is something like this:
class TestUser(OtherClassWhichInheritsFromBaseCase):
    def test_user1(self, user1):
        self.open("www.google.com")
        print(user1)
When I try to run the code in PyCharm, I get the following error:
    def _callTestMethod(self, method):
>       method()
E       TypeError: TestUser.test_user1() missing 1 required positional argument: 'user1'
Any advice?
I think you're approaching this from the wrong direction. If you need to clean up after a fixture, you don't write a second fixture; you write the fixture itself as a generator with a yield, so the teardown lives in the same place (much like a context manager).
For example, you might write:
@pytest.fixture(scope="session")
def user():
    u = User(name="Matt")
    yield u
    # cleanup goes here
And in your test code:
def test_something(user):
    assert user.name == "Matt"
Here's a complete example. We start with this dummy user.py, which simply creates files to demonstrate which methods were called:
from dataclasses import dataclass

@dataclass
class User:
    name: str

    def commit(self):
        open("commit_was_called", "w")

    def delete(self):
        open("delete_was_called", "w")
Then here's our test:
import pytest

import user

@pytest.fixture(scope="session")
def a_user():
    u = user.User(name="testuser")
    u.commit()
    yield u
    u.delete()

class TestUserStuff:
    def test_user(self, a_user):
        assert a_user.name == "testuser"
We run it like this:
$ pytest
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.10.1, pytest-6.2.4, py-1.11.0, pluggy-0.13.1
rootdir: /home/lars/tmp/python
plugins: testinfra-6.5.0
collected 1 item
test_user.py . [100%]
================================================================================================ 1 passed in 0.00s ================================================================================================
After which we can confirm that both the commit and delete methods were called:
$ ls
commit_was_called
delete_was_called
test_user.py
user.py
I'm using SQLAlchemy + Ormar, and I want to write tests as clean as it is possible to write them with pytest-django:
import pytest

@pytest.mark.django_db
def test_user_count():
    assert User.objects.count() == 0
I'm using FastAPI and not using Django at all, so the decorator above isn't available to me.
How can I write clean tests with database access like the above, but without Django? It would be great to have that infrastructure for SQLAlchemy + Ormar, but changing the ORM is an option too.
Example of model to test:
class User(ormar.Model):
    class Meta:
        metadata = metadata
        database = database

    id: int = ormar.BigInteger(primary_key=True)
    phone: str = ormar.String(max_length=100)
    account: str = ormar.String(max_length=100)
I think this discussion can be useful for you: https://github.com/collerek/ormar/discussions/136
Using an autouse fixture should help you:
# fixture
@pytest.fixture(autouse=True, scope="module")  # adjust your scope
def create_test_database():
    engine = sqlalchemy.create_engine(DATABASE_URL)
    metadata.drop_all(engine)  # drop before as well - even if a test crashes in the middle, we start clean
    metadata.create_all(engine)
    yield
    metadata.drop_all(engine)

# actual test - note that to test async code you need pytest-asyncio and the asyncio mark
@pytest.mark.asyncio
async def test_actual_logic():
    async with database:  # <= note this is the same database used in the ormar Models
        ...  # (logic)
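For example, here is a hedged sketch of what such a test could look like with the User model from the question; the app.models import path and the field values are assumptions, not part of the original:
# sketch of an actual test against the ormar model (needs pytest-asyncio)
import pytest
from app.models import User, database  # hypothetical module holding the model

@pytest.mark.asyncio
async def test_user_count():
    async with database:
        assert await User.objects.count() == 0
        await User.objects.create(phone="123456789", account="acc-1")
        assert await User.objects.count() == 1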
This is what I use for my standalone script (a notebook) in the root directory of the project, where manage.py resides:
import sys, os, django
# append your project to your path
sys.path.append("./<your-project>")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<your-project>.settings")
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true" # for notebooks only
django.setup()
# import the model
from listings.models import Listing
However, it should be noted that Django comes with its own unit testing support. Have a look here. This will enable you to run tests with python3 manage.py test <your-test>.
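For comparison, a minimal sketch of such a Django test, using the Listing model imported above (runnable with python3 manage.py test):
from django.test import TestCase
from listings.models import Listing

class ListingTests(TestCase):
    def test_no_listings_initially(self):
        # each TestCase test runs inside a transaction that is rolled back afterwards
        self.assertEqual(Listing.objects.count(), 0)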
There is a bit of magic happening here (but this is generally true within pytest). The @pytest.mark.django_db marker simply marks the test, but doesn't do much else on its own. The heavy lifting happens later on inside pytest-django, where the plugin filters/scans for tests with that mark and adds the appropriate fixtures to them.
We can replicate this behavior:
# conftest.py (or inside a dedicated plugin, if you fancy)
import pytest

# register the custom marker called my_orm
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "my_orm: This test uses my ORM to connect to my DB."
    )

@pytest.fixture()
def setup_my_orm():
    print("TODO: set up DB and connect.")
    yield
    print("TODO: tear down DB and disconnect.")

# this is where the magic happens
def pytest_runtest_setup(item):
    needs_my_orm = len([marker for marker in item.iter_markers(name="my_orm")]) > 0
    if needs_my_orm and "setup_my_orm" not in item.fixturenames:
        item.fixturenames.append("setup_my_orm")

# test_mymodule.py
import pytest

@pytest.mark.my_orm
def test_foo():
    assert 0 == 0
You can check that the test indeed prints the above TODO statements via pytest -s.
Of course, you can customize this further using parameters for the marker, more sophisticated fixture scoping, etc. This should, however, put you on the right track :)
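For instance, here is a hedged sketch of reading a marker argument inside that same pytest_runtest_setup hook; the db keyword is a made-up example, not part of the original:
# sketch: a variant of pytest_runtest_setup that honors a marker argument
def pytest_runtest_setup(item):
    marker = item.get_closest_marker("my_orm")
    if marker is not None:
        # hypothetical usage: @pytest.mark.my_orm(db="sqlite")
        db = marker.kwargs.get("db", "default")
        print(f"my_orm test requested the '{db}' database")
        if "setup_my_orm" not in item.fixturenames:
            item.fixturenames.append("setup_my_orm")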
I have a BaseTest class which has a tear_down, and I want a variable inside tear_down representing whether or not the test has failed.
I looked at a lot of older posts, but I couldn't implement them, as they were hooks or a mixture of hooks and fixtures and something did not work on my end.
What is the best practice for doing that?
The last thing I tried was:
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item):
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
Then I passed the request fixture to the teardown and inside it used
has_failed = request.node.rep_call.failed
But request had no attributes at all; it was a method.
I also tried:
@pytest.fixture
def has_failed(request):
    yield
    return True if request.node.rep_call.failed else False
and passed it like this:
def teardown_method(self, has_failed):
And again, no attributes.
Isn't there a simple fixture to just do something like request.test_status?
It's important that the teardown receives that bool parameter saying whether or not the test failed, rather than doing this work outside the teardown.
Thanks!
There doesn't appear to be any super simple built-in fixture offering the test report. And I see what you mean: most examples of recording the test report are geared toward non-unittest use cases (including the official docs). However, we can adjust these examples to work with unittest TestCases.
There appears to be a private _testcase attribute on the item arg passed to pytest_runtest_makereport, which contains the instance of the TestCase. We can set an attribute on it, which can then be accessed within teardown_method.
# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == 'call' and hasattr(item, '_testcase'):
        item._testcase.did_pass = report.passed
And here's a dinky little example TestCase:
import unittest

class DescribeIt(unittest.TestCase):
    def setup_method(self, method):
        self.did_pass = None

    def teardown_method(self, method):
        print('\nself.did_pass =', self.did_pass)

    def test_it_works(self):
        assert True

    def test_it_doesnt_work(self):
        assert False
When we run it, we find it prints the proper test failure/success bool:
$ py.test --no-header --no-summary -qs
============================= test session starts =============================
collected 2 items
tests/tests.py::DescribeIt::test_it_doesnt_work FAILED
self.did_pass = False
tests/tests.py::DescribeIt::test_it_works PASSED
self.did_pass = True
========================= 1 failed, 1 passed in 0.02s =========================
Long story short, I want to be able to skip some tests if the session is being run against our production API. The environment that the tests are run against is set with a command-line option.
I came across the idea of using the pytest_namespace to track global variables, so I set that up in my conftest.py file.
def pytest_namespace():
    return {'global_env': ''}
I take in the command line option and set various API urls (from a config.ini file) in a fixture in conftest.py.
@pytest.fixture(scope='session', autouse=True)
def configInfo(pytestconfig):
    global data
    environment = pytestconfig.getoption('--ENV')
    print(environment)
    environment = str.lower(environment)
    pytest.global_env = environment
    config = configparser.ConfigParser()
    config.read('config.ini')  # local config file
    configData = config['QA-CONFIG']
    if environment == 'qa':
        configData = config['QA-CONFIG']
    if environment == 'prod':
        configData = config['PROD-CONFIG']
    (...)
Then I've got the test I want to skip, and it's decorated like so:
@pytest.mark.skipif(pytest.global_env in 'prod',
                    reason="feature not in Prod yet")
However, whenever I run the tests against prod, they don't get skipped. I did some fiddling around, and found that:
a) the global_env variable is accessible through another fixture
#pytest.fixture(scope="session", autouse=True)
def mod_header(request):
log.info('\n-----\n| '+pytest.global_env+' |\n-----\n')
displays correctly in my logs
b) the global_env variable is accessible in a test, correctly logging the env.
c) pytest_namespace is deprecated
So, I'm assuming this has to do with when the skipif accesses that global_env vs. when the fixtures do in the test session. I also find it non-ideal to rely on deprecated functionality.
My question is:
how do I get a value from the pytest command line option into a skipif?
Is there a better way to be trying this than the pytest_namespace?
It looks like the proper way to control skipping of tests according to a command-line option is to mark the tests as skipped dynamically:
Add an option using the pytest_addoption hook, like this:
def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )
Use the pytest_collection_modifyitems hook to add a marker, like this:
def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
Add the mark to your test:
@pytest.mark.slow
def test_func_slow():
    pass
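To avoid PytestUnknownMarkWarning for the custom slow mark, you can also register it in conftest.py; a sketch using the same addinivalue_line pattern shown elsewhere in this thread:
# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow: marks tests as slow (skipped unless --runslow is given)"
    )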
If you want to use data from the CLI in a test (for example, credentials), it is enough to specify the skip option when retrieving them from pytestconfig.
Add an option using the pytest_addoption hook, like this:
def pytest_addoption(parser):
    parser.addoption(
        "--credentials",
        action="store",
        default=None,
        help="credentials to ..."
    )
Use the skip option when getting it from pytestconfig:
@pytest.fixture(scope="session")
def super_secret_fixture(pytestconfig):
    credentials = pytestconfig.getoption('--credentials', skip=True)
    ...
Use the fixture as usual in your test:
def test_with_fixture(super_secret_fixture):
    ...
In this case you will get something like this if you do not pass the --credentials option on the CLI:
Skipped: no 'credentials' option found
It is better to use _pytest.config.get_config instead of the deprecated pytest.config if you still want to use pytest.mark.skipif, like this:
@pytest.mark.skipif(not _pytest.config.get_config().getoption('--credentials'), reason="--credentials was not specified")
The problem with putting global code in fixtures is that markers are evaluated before fixtures, so when skipif is evaluated, configInfo hasn't run yet and pytest.global_env will be empty. I'd suggest moving the configuration code from the fixture to the pytest_configure hook:
# conftest.py
import configparser
import pytest

def pytest_addoption(parser):
    parser.addoption('--ENV')

def pytest_configure(config):
    environment = config.getoption('--ENV')
    pytest.global_env = environment
    ...
The configuration hook is guaranteed to execute before the tests are collected and the markers are evaluated.
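With that in place, the marker from the question can evaluate the value at collection time; a sketch (global_env being the attribute set in pytest_configure above, and test_new_feature a made-up test name):
import pytest

@pytest.mark.skipif(pytest.global_env == 'prod',
                    reason="feature not in Prod yet")
def test_new_feature():
    ...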
Is there a better way to be trying this than the pytest_namespace?
Some ways I know of:
Simply assign a module variable in pytest_configure (pytest.foo = 'bar', like I did in the example above).
Use the config object as it is shared throughout the test session:
def pytest_configure(config):
    config.foo = 'bar'

@pytest.fixture
def somefixture(pytestconfig):
    assert pytestconfig.foo == 'bar'

def test_foo(pytestconfig):
    assert pytestconfig.foo == 'bar'
Outside of the fixtures/tests, you can access the config via pytest.config, for example:
@pytest.mark.skipif(pytest.config.foo == 'bar', reason='foo is bar')
def test_baz():
    ...
Use caching; this has an additional feature of persisting data between the test runs:
def pytest_configure(config):
    config.cache.set('foo', 'bar')

@pytest.fixture
def somefixture(pytestconfig):
    assert pytestconfig.cache.get('foo', None)

def test_foo(pytestconfig):
    assert pytestconfig.cache.get('foo', None)

@pytest.mark.skipif(pytest.config.cache.get('foo', None) == 'bar', reason='foo is bar')
def test_baz():
    assert True
When using 1. or 2., make sure you don't unintentionally overwrite pytest stuff with your own data; prefixing your own variables with a unique name is a good idea. When using caching, you don't have this problem.
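For example, a quick sketch of such prefixing (the myproj_ prefix and the --ENV option are illustrative assumptions):
import pytest

def pytest_configure(config):
    # a unique prefix keeps custom attributes from clashing with pytest's own names
    pytest.myproj_env = config.getoption('--ENV', default=None)
    config.myproj_env = pytest.myproj_env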
So what I would like to achieve is mocking functions in various modules automatically with pytest. So I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture factory magic END
This works and all, but I would like to omit the NORMAL_MOCKS and BUILTIN_MOCKS lists. Basically, in a pytest hook I should be able to see that, say, there is a mock_foo fixture which isn't registered yet, create a mock for it with the factory, and register it. I just couldn't figure out how to do this. I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So basically I would like to know which hook/call I can use to register new fixture functions programmatically.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest

def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])

# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can use a less obvious trick:
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, from what I've just tried.
Note that you do not register the fixture in that case. It is too late for the fixture registration, as all the fixtures are gathered and prepared much earlier at the collection/parametrization stages. In this stage, you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
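To make that last point concrete, here is a hedged sketch extending the pytest_runtest_protocol trick above so the injected values are also dropped after the test; mock_factory and the mock_ prefix are the same assumptions as before:
# conftest.py (sketch: inject before the test, clean up afterwards)
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    injected = []
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
            injected.append(name)
    yield  # the test runs here
    for name in injected:
        item.funcargs.pop(name, None)  # drop whatever we injected ourselves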
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for; I just looked at the source code, came up with this, and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3).
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest

@pytest.fixture(autouse=True)  # autouse is relevant, as then the fixture registration happens in time. It's too late if requiring the fixture without autouse, e.g. via `@pytest.mark.usefixtures("add_fixture_dynamically")`
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")

    # don't register fixture if marker is not present:
    if marker is None:
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally

# suppress warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.<fixture_name>'"""
    config.addinivalue_line("markers", "my_mark")

@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"
I am using py.test and wonder if/how it is possible to retrieve the name of the currently executed test within the setup method that is invoked before running each test. Consider this code:
class TestSomething(object):
    def setup(self):
        test_name = ...

    def teardown(self):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
Right before TestSomething.test_the_power becomes executed, I would like to have access to this name in setup as outlined in the code via test_name = ... so that test_name == "TestSomething.test_the_power".
Actually, in setup, I allocate some resource for each test. In the end, looking at the resources that have been created by various unit tests, I would like to be able to see which one was created by which test. Best thing would be to just use the test name upon creation of the resource.
You can also do this using the request fixture, like this:
def test_name1(request):
    testname = request.node.name
    assert testname == 'test_name1'
You can also use the PYTEST_CURRENT_TEST environment variable set by pytest for each test case.
To get just the test name:
os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]
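For example, a hedged sketch of using it from a setup method (the environment variable has the form 'path/to/test_file.py::test_name (stage)'; the class and test names here are made up):
import os

class TestSomething:
    def setup_method(self, method):
        current = os.environ.get("PYTEST_CURRENT_TEST", "")
        # "tests/test_foo.py::TestSomething::test_bar (setup)" -> "test_bar"
        test_name = current.split("::")[-1].split(" ")[0]
        print("setting up resources for", test_name)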
The setup and teardown methods seem to be legacy methods for supporting tests written for other frameworks, e.g. nose. The native pytest methods are called setup_method and teardown_method, which receive the currently executed test method as an argument. Hence, what I want to achieve can be written like so:
class TestSomething(object):
    def setup_method(self, method):
        print "\n%s:%s" % (type(self).__name__, method.__name__)

    def teardown_method(self, method):
        pass

    def test_the_power(self):
        assert "foo" != "bar"

    def test_something_else(self):
        assert True
The output of py.test -s then is:
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
plugins: cov
collected 2 items
test_pytest.py
TestSomething:test_the_power
.
TestSomething:test_something_else
.
=========================== 2 passed in 0.03 seconds ===========================
Short answer:
Use the fixture called request.
This fixture has the following interesting attributes:
request.node.originalname = the name of the function/method
request.node.name = name of the function/method and ids of the parameters
request.node.nodeid = relative path to the test file, name of the test class (if in a class), name of the function/method and ids of the parameters
Long answer:
I inspected the content of request.node. Here are the most interesting attributes I found:
class TestClass:
    @pytest.mark.parametrize("arg", ["a"])
    def test_stuff(self, request, arg):
        print("originalname:", request.node.originalname)
        print("name:", request.node.name)
        print("nodeid:", request.node.nodeid)
Prints the following:
originalname: test_stuff
name: test_stuff[a]
nodeid: relative/path/to/test_things.py::TestClass::test_stuff[a]
The nodeid is the most promising if you want to completely identify the test (including the parameters). Note that if the test is a plain function (instead of a method in a class), the class name (::TestClass) is simply missing.
You can parse nodeid as you wish, for example:
components = request.node.nodeid.split("::")
filename = components[0]
test_class = components[1] if len(components) == 3 else None
test_func_with_params = components[-1]
test_func = test_func_with_params.split('[')[0]
test_params = test_func_with_params.split('[')[1][:-1].split('-')
In my example, this results in:
filename = 'relative/path/to/test_things.py'
test_class = 'TestClass'
test_func = 'test_stuff'
test_params = ['a']
# content of conftest.py
import logging
import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope='function', autouse=True)
def test_log(request):
    # Here logging is used; you can use whatever you want to use for logs
    log.info("STARTED Test '{}'".format(request.node.name))

    def fin():
        log.info("COMPLETED Test '{}' \n".format(request.node.name))

    request.addfinalizer(fin)
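An equivalent sketch using yield instead of addfinalizer (assuming the same log object as above):
# content of conftest.py (yield-based variant)
@pytest.fixture(scope='function', autouse=True)
def test_log(request):
    log.info("STARTED Test '{}'".format(request.node.name))
    yield
    log.info("COMPLETED Test '{}' \n".format(request.node.name))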
Try my little wrapper function, which returns the full name of the test, the test file, and the test name. You can use whichever you like later.
I used it within conftest.py where fixtures do not work as far as I know.
import os

def get_current_test():
    full_name = os.environ.get('PYTEST_CURRENT_TEST').split(' ')[0]
    test_file = full_name.split("::")[0].split('/')[-1].split('.py')[0]
    test_name = full_name.split("::")[1]
    return full_name, test_file, test_name
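A usage sketch from inside a test, assuming the test is a plain function (not a method in a class, since test_name takes the second :: component) and that get_current_test is importable from wherever you placed it:
def test_something():
    full_name, test_file, test_name = get_current_test()
    print(full_name, test_file, test_name)
    assert test_name == "test_something"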
You might have multiple tests, in which case...
test_names = [n for n in dir(self) if n.startswith('test_')]
...will give you all the functions and instance variables that begin with "test_" in self. As long as you don't have any variables named "test_something" this will work.
You can also define a method setup_method(self, method) instead of setup(self) and that will be called before each test method invocation. Using this, you're simply given each method as a parameter. See: http://pytest.org/latest/xunit_setup.html
You could give the inspect module a try.
import inspect

def foo():
    print "My name is: ", inspect.stack()[0][3]

foo()
Output: My name is: foo
Try type(self).__name__ perhaps?