How to write clean tests for models with database access - Python

I'm using SQLAlchemy + Ormar and I want to write tests as clean as what pytest-django allows:
import pytest

@pytest.mark.django_db
def test_user_count():
    assert User.objects.count() == 0
I'm using FastAPI and not using Django at all, so the decorator above isn't available.
How can I write clean tests for models with database access like the above, but without Django? It would be great to have that infrastructure for SQLAlchemy + Ormar, but changing the ORM is an option too.
Example of model to test:
class User(ormar.Model):
    class Meta:
        metadata = metadata
        database = database

    id: int = ormar.BigInteger(primary_key=True)
    phone: str = ormar.String(max_length=100)
    account: str = ormar.String(max_length=100)

I think this discussion may be useful for you: https://github.com/collerek/ormar/discussions/136
Using an autouse fixture should help you:
# fixture
@pytest.fixture(autouse=True, scope="module")  # adjust the scope to your needs
def create_test_database():
    engine = sqlalchemy.create_engine(DATABASE_URL)
    metadata.drop_all(engine)  # drop before as well - even if a previous run crashed mid-test we start clean
    metadata.create_all(engine)
    yield
    metadata.drop_all(engine)

# actual test - note that to test async code you need pytest-asyncio and must mark the test as asyncio
@pytest.mark.asyncio
async def test_actual_logic():
    async with database:  # <= note this is the same database used in the ormar models
        ... # (logic)
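For completeness, a test against the User model from the question could look roughly like this (a sketch, assuming the fixture above and the model and database objects from the question):
@pytest.mark.asyncio
async def test_user_count():
    async with database:
        # starts from an empty table thanks to the fixture above
        assert await User.objects.count() == 0
        await User.objects.create(phone="555-0100", account="demo")
        assert await User.objects.count() == 1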

This is what I use for my standalone script (a notebook) in the root directory of the project, where manage.py resides:
import sys, os, django
# append your project to your path
sys.path.append("./<your-project>")
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<your-project>.settings")
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true" # for notebooks only
django.setup()
# import the model
from listings.models import Listing
However, it should be noted that Django comes with its own unit testing framework. Have a look at Django's testing documentation. This will enable you to run tests with python3 manage.py test <your-test>.
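As a rough sketch of what such a test could look like (assuming the Listing model imported above):
from django.test import TestCase
from listings.models import Listing

class ListingTests(TestCase):
    def test_listing_count_starts_at_zero(self):
        # each test runs inside a transaction that is rolled back afterwards
        self.assertEqual(Listing.objects.count(), 0)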

There is a bit of magic happening here (but this is generally true within pytest). The @pytest.mark.django_db marker simply marks the test but doesn't do much else on its own. The heavy lifting happens later inside pytest-django, where the plugin filters/scans for tests with that mark and adds the appropriate fixtures to them.
We can replicate this behavior:
# conftest.py (or inside a dedicated plugin, if you fancy)
import pytest

# register the custom marker called my_orm
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "my_orm: This test uses my ORM to connect to my DB."
    )

@pytest.fixture()
def setup_my_orm():
    print("TODO: set up DB and connect.")
    yield
    print("TODO: tear down DB and disconnect.")

# this is where the magic happens
def pytest_runtest_setup(item):
    needs_my_orm = len([marker for marker in item.iter_markers(name="my_orm")]) > 0
    if needs_my_orm and "setup_my_orm" not in item.fixturenames:
        item.fixturenames.append("setup_my_orm")

# test_mymodule.py
@pytest.mark.my_orm
def test_foo():
    assert 0 == 0
You can check that the test indeed prints the above TODO statements via pytest -s.
Of course, you can customize this further using parameters for the marker, more sophisticated fixture scoping, etc. This should, however, put you on the right track :)
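For example, a hedged sketch of passing a parameter through the marker to the fixture (the db_name keyword is purely illustrative):
@pytest.fixture()
def setup_my_orm(request):
    marker = request.node.get_closest_marker("my_orm")
    db_name = marker.kwargs.get("db_name", "default") if marker else "default"  # hypothetical parameter
    print(f"TODO: set up DB '{db_name}' and connect.")
    yield
    print(f"TODO: tear down DB '{db_name}' and disconnect.")

@pytest.mark.my_orm(db_name="users")
def test_bar():
    assert 1 == 1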

Related

Use Session Scoped Fixture as Variable

I am writing tests in pytest and am using fixtures as variables.
Originally, this is how the fixtures looked:
#pytest.fixture(scope="class")
def user(request):
u = "Matt"
request.cls.u = u
return u
And then, there was another fixture to delete the user from the database once I finished with it.
In the tests, I used both fixtures like so: @pytest.mark.usefixtures("user", "teardown_fixture")
The teardown fixture was class scoped, until I decided to change it to session scoped, since I want to delete the user only after running all the tests.
The problem was that suddenly the teardown fixture couldn't access user, since user is class scoped.
I changed the user to session scoped, however, I am not sure how to access, or export it now.
#pytest.fixture(scope="session")
def user(request):
u = "Matt"
# request.cls.user = u -> WHAT GOES HERE INSTEAD OF THIS?
return u
User is no longer recognized in the test functions. The test is located inside of a class. The current function is something like this:
class TestUser(OtherClassWhichInheritsFromBaseCase):
    def test_user1(self, user1):
        self.open("www.google.com")
        print(user1)
When I try to run the code in pycharm I get the following error:
def _callTestMethod(self, method):
> method()
E TypeError: TestUser.test_user1() missing 1 required positional argument: 'user1'
Any advice?
I think you're approaching this from the wrong direction. If you need to clean up a fixture, you don't write a second fixture; you write your fixture as a context manager.
For example, you might write:
#pytest.fixture(scope="session")
def user():
u = User(name="Matt")
yield u
# cleanup goes here
And in your test code:
def test_something(user):
assert user.name == "Matt"
Here's a complete example. We start with this dummy user.py, which simply creates files to demonstrate which methods were called:
from dataclasses import dataclass

@dataclass
class User:
    name: str

    def commit(self):
        open("commit_was_called", "w")

    def delete(self):
        open("delete_was_called", "w")
Then here's our test:
import pytest
import user

@pytest.fixture(scope="session")
def a_user():
    u = user.User(name="testuser")
    u.commit()
    yield u
    u.delete()

class TestUserStuff:
    def test_user(self, a_user):
        assert a_user.name == "testuser"
We run it like this:
$ pytest
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.10.1, pytest-6.2.4, py-1.11.0, pluggy-0.13.1
rootdir: /home/lars/tmp/python
plugins: testinfra-6.5.0
collected 1 item
test_user.py . [100%]
================================================================================================ 1 passed in 0.00s ================================================================================================
After which we can confirm that both the commit and delete methods were called:
$ ls
commit_was_called
delete_was_called
test_user.py
user.py

Using a command-line option in a pytest skip-if condition

Long story short, I want to be able to skip some tests if the session is being run against our production API. The environment that the tests are run against is set with a command-line option.
I came across the idea of using the pytest_namespace to track global variables, so I set that up in my conftest.py file.
def pytest_namespace():
    return {'global_env': ''}
I take in the command line option and set various API urls (from a config.ini file) in a fixture in conftest.py.
@pytest.fixture(scope='session', autouse=True)
def configInfo(pytestconfig):
    global data
    environment = pytestconfig.getoption('--ENV')
    print(environment)
    environment = str.lower(environment)
    pytest.global_env = environment
    config = configparser.ConfigParser()
    config.read('config.ini')  # local config file
    configData = config['QA-CONFIG']
    if environment == 'qa':
        configData = config['QA-CONFIG']
    if environment == 'prod':
        configData = config['PROD-CONFIG']
    (...)
Then I've got the test I want to skip, and it's decorated like so:
@pytest.mark.skipif(pytest.global_env in 'prod',
                    reason="feature not in Prod yet")
However, whenever I run the tests against prod, they don't get skipped. I did some fiddling around, and found that:
a) the global_env variable is accessible through another fixture
#pytest.fixture(scope="session", autouse=True)
def mod_header(request):
log.info('\n-----\n| '+pytest.global_env+' |\n-----\n')
displays correctly in my logs
b) the global_env variable is accessible in a test, correctly logging the env.
c) pytest_namespace is deprecated
So, I'm assuming this has to do with when the skipif accesses that global_env vs. when the fixtures do in the test session. I also find it non-ideal to use a deprecated functionality.
My question is:
how do I get a value from the pytest command line option into a skipif?
Is there a better way to be trying this than the pytest_namespace?
It looks like the right way to control skipping of tests according to a command-line option is to mark tests as skipped dynamically:
Add an option using the pytest_addoption hook:
def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )
Use the pytest_collection_modifyitems hook to add the marker:
def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
Add the mark to your test:
@pytest.mark.slow
def test_func_slow():
    pass
If you want to use data from the CLI in a test, for example credentials, it is enough to specify the skip option when retrieving them from pytestconfig:
Add an option using the pytest_addoption hook:
def pytest_addoption(parser):
    parser.addoption(
        "--credentials",
        action="store",
        default=None,
        help="credentials to ..."
    )
Use the skip option when getting it from pytestconfig:
@pytest.fixture(scope="session")
def super_secret_fixture(pytestconfig):
    credentials = pytestconfig.getoption('--credentials', skip=True)
    ...
Use the fixture as usual in your test:
def test_with_fixture(super_secret_fixture):
    ...
In this case you will get something like this if you do not pass the --credentials option on the CLI:
Skipped: no 'credentials' option found
It is better to use _pytest.config.get_config() instead of the deprecated pytest.config if you still want to use pytest.mark.skipif, like this:
@pytest.mark.skipif(not _pytest.config.get_config().getoption('--credentials'), reason="--credentials was not specified")
The problem with putting global code in fixtures is that markers are evaluated before fixtures, so when the skipif is evaluated, configInfo hasn't run yet and pytest.global_env will be empty. I'd suggest moving the configuration code from the fixture to the pytest_configure hook:
# conftest.py
import configparser
import pytest

def pytest_addoption(parser):
    parser.addoption('--ENV')

def pytest_configure(config):
    environment = config.getoption('--ENV')
    pytest.global_env = environment
    ...
The configuration hook is guaranteed to execute before the tests are collected and the markers are evaluated.
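With that in place, the original marker can read the value, because test modules are imported after pytest_configure has run (a sketch, assuming --ENV is passed on the command line):
@pytest.mark.skipif(pytest.global_env == 'prod',
                    reason="feature not in Prod yet")
def test_new_feature():
    ...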
Is there a better way to be trying this than the pytest_namespace?
Some ways I know of:
Simply assign a module variable in pytest_configure (pytest.foo = 'bar', like I did in the example above).
Use the config object as it is shared throughout the test session:
def pytest_configure(config):
    config.foo = 'bar'

@pytest.fixture
def somefixture(pytestconfig):
    assert pytestconfig.foo == 'bar'

def test_foo(pytestconfig):
    assert pytestconfig.foo == 'bar'
Outside of the fixtures/tests, you can access the config via pytest.config, for example:
@pytest.mark.skipif(pytest.config.foo == 'bar', reason='foo is bar')
def test_baz():
    ...
Use caching; this has an additional feature of persisting data between the test runs:
def pytest_configure(config):
    config.cache.set('foo', 'bar')

@pytest.fixture
def somefixture(pytestconfig):
    assert pytestconfig.cache.get('foo', None)

def test_foo(pytestconfig):
    assert pytestconfig.cache.get('foo', None)

@pytest.mark.skipif(pytest.config.cache.get('foo', None) == 'bar', reason='foo is bar')
def test_baz():
    assert True
When using 1. or 2., make sure you don't unintentionally overwrite pytest stuff with your own data; prefixing your own variables with a unique name is a good idea. When using caching, you don't have this problem.

How to dynamically add new fixtures to a test based on the fixture signature of a test

So what I would like to achieve is mocking functions in various modules automatically with pytest. So I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture normal factory magic END
This works and all, but I would like to omit the usage of the NORMAL_MOCKS and BUILTIN_MOCKS lists. So basically, in a pytest hook I should be able to see that, say, there is a mock_foo fixture which is not registered yet, create a mock for it with the factory, and register it. I just couldn't figure out how to do this. Basically I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So I would like to know with which hook/call I can register new fixture functions programmatically from this hook.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest

def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])

# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can make a less obvious trick:
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, as far as I have just tried.
Note that you do not register the fixture in that case. It is too late for the fixture registration, as all the fixtures are gathered and prepared much earlier at the collection/parametrization stages. In this stage, you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for, I just looked at the source code and came up with this and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3)
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest

# autouse is relevant: that way the fixture registration happens in time. It's too late if the
# fixture is required without autouse, e.g. via `@pytest.mark.usefixtures("add_fixture_dynamically")`
@pytest.fixture(autouse=True)
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")

    # don't register the fixture if the marker is not present:
    if marker is None:
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally

# suppress warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.<fixture_name>'"""
    config.addinivalue_line("markers", "my_mark")

@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"

Set up and tear down functions for pytest bdd

I'm trying to do setup and teardown modules using pytest-bdd.
I know that with behave you can have an environment.py file with before_all and after_all hooks. How do I do this in pytest-bdd?
I have looked into the "classic xunit-style setup" approach and it didn't work when I tried it. (I know that's more related to pytest and not pytest-bdd.)
You could just declare a pytest.fixture with autouse=True and whatever scope you want. You can then use the request fixture to specify the teardown. E.g.:
@pytest.fixture(autouse=True, scope='module')
def setup(request):
    # Setup code

    def fin():
        # Teardown code
        pass

    request.addfinalizer(fin)
A simple-ish approach for me is to use a trivial fixture.
# This declaration can go in the project's conftest.py:
@pytest.fixture
def context():
    class Context(object):
        pass
    return Context()

@given('some given step')
def some_given_step(context):
    context.uut = ...

@when('some when step')
def some_when_step(context):
    context.result = context.uut...
Note: conftest.py allows you to share fixtures between test modules, and putting everything in one file gives me a static analysis warning anyway.
"pytest supports execution of fixture specific finalization code when the fixture goes out of scope. By using a yield statement instead of return, all the code after the yield statement serves as the teardown code:"
See: https://docs.pytest.org/en/latest/fixture.html
e.g.
@pytest.fixture(scope="module")
def smtp_connection():
    smtp_connection = smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
    yield smtp_connection  # provide the fixture value
    print("teardown smtp")
    smtp_connection.close()
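A usage sketch for that fixture, along the lines of the pytest docs:
def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250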

How do you unit test a Celery task?

The Celery documentation mentions testing Celery within Django but doesn't explain how to test a Celery task if you are not using Django. How do you do this?
It is possible to test tasks synchronously using any unittest lib out there. I normally do two different test sessions when working with celery tasks. The first one (as I'm suggesting below) is completely synchronous and should be the one that makes sure the algorithm does what it should do. The second session uses the whole system (including the broker) and makes sure I'm not having serialization issues or any other distribution or communication problem.
So:
from celery import Celery

celery = Celery()

@celery.task
def add(x, y):
    return x + y
And your test:
from nose.tools import eq_

def test_add_task():
    rst = add.apply(args=(4, 4)).get()
    eq_(rst, 8)
Here is an update to my seven-year-old answer:
You can run a worker in a separate thread via a pytest fixture:
https://docs.celeryq.dev/en/v5.2.6/userguide/testing.html#celery-worker-embed-live-worker
According to the docs, you should not use "always_eager" (see the top of the page of the above link).
Old answer:
I use this:
with mock.patch('celeryconfig.CELERY_ALWAYS_EAGER', True, create=True):
    ...
Docs: https://docs.celeryq.dev/en/3.1/configuration.html#celery-always-eager
CELERY_ALWAYS_EAGER lets you run your task synchronously, and you don't need a celery server.
Depends on what exactly you want to be testing.
Test the task code directly. Don't call task.delay(...); just call task(...) from your unit tests (see the sketch below).
Use CELERY_ALWAYS_EAGER. This will cause your tasks to be called immediately at the point you say task.delay(...), so you can test the whole path (but not any asynchronous behavior).
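A sketch, reusing the add task defined in the first answer of this thread (the second test uses apply(), which runs the task locally much like eager mode would):
def test_add_directly():
    # option 1: call the task function itself; no broker or worker involved
    assert add(2, 3) == 5

def test_add_applied_locally():
    # apply() executes the task in-process and returns an EagerResult
    assert add.apply(args=(2, 3)).get() == 5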
For those on Celery 4 it's:
@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
The setting names have changed and need updating if you choose to upgrade; see
https://docs.celeryproject.org/en/latest/history/whatsnew-4.0.html?highlight=what%20is%20new#lowercase-setting-names
unittest
import unittest
from myproject.myapp import celeryapp

class TestMyCeleryWorker(unittest.TestCase):
    def setUp(self):
        celeryapp.conf.update(CELERY_ALWAYS_EAGER=True)
py.test fixtures
# conftest.py
import pytest
from myproject.myapp import celeryapp

@pytest.fixture(scope='module')
def celery_app(request):
    celeryapp.conf.update(CELERY_ALWAYS_EAGER=True)
    return celeryapp

# test_tasks.py
def test_some_task(celery_app):
    ...
Addendum: make send_task respect eager
from celery import current_app

def send_task(name, args=(), kwargs={}, **opts):
    # https://github.com/celery/celery/issues/581
    task = current_app.tasks[name]
    return task.apply(args, kwargs, **opts)

current_app.send_task = send_task
As of Celery 3.0, one way to set CELERY_ALWAYS_EAGER in Django is:
from django.test import TestCase, override_settings
from .foo import foo_celery_task

class MyTest(TestCase):
    @override_settings(CELERY_ALWAYS_EAGER=True)
    def test_foo(self):
        self.assertTrue(foo_celery_task.delay())
Since Celery v4.0, py.test fixtures are provided to start a celery worker just for the test and are shut down when done:
def test_myfunc_is_executed(celery_session_worker):
    # celery_session_worker: <Worker: gen93553@mymachine.local (running)>
    assert myfunc.delay().wait(3)
Among other fixtures described on http://docs.celeryproject.org/en/latest/userguide/testing.html#py-test, you can change the celery default options by redefining the celery_config fixture this way:
@pytest.fixture(scope='session')
def celery_config():
    return {
        'accept_content': ['json', 'pickle'],
        'result_serializer': 'pickle',
    }
By default, the test worker uses an in-memory broker and result backend, so there is no need for a local Redis or RabbitMQ unless you are testing features that depend on them (see the Celery testing reference linked above).
Using pytest:
def test_add(celery_worker):
    mytask.delay()
If you use Flask, set the app config:
CELERY_BROKER_URL = 'memory://'
CELERY_RESULT_BACKEND = 'cache+memory://'
and in conftest.py
@pytest.fixture
def app():
    yield app  # Your actual Flask application

@pytest.fixture
def celery_app(app):
    from celery.contrib.testing import tasks  # need it
    yield celery_app  # Your actual Flask-Celery application
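A slightly fuller sketch of that conftest.py, assuming an application-factory layout (create_app and make_celery are hypothetical names from your own project):
import pytest
from myapp import create_app, make_celery  # hypothetical factory functions

@pytest.fixture
def app():
    app = create_app()
    app.config.update(
        CELERY_BROKER_URL='memory://',
        CELERY_RESULT_BACKEND='cache+memory://',
    )
    yield app

@pytest.fixture
def celery_app(app):
    from celery.contrib.testing import tasks  # noqa: F401 -- registers the ping task the test worker expects
    yield make_celery(app)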
In my case (and I assume many others), all I wanted was to test the inner logic of a task using pytest.
TL;DR: I ended up mocking everything away (option 2).
Example Use Case:
proj/tasks.py
@shared_task(bind=True)
def add_task(self, a, b):
    return a + b
tests/test_tasks.py
from proj import add_task

def test_add():
    assert add_task(1, 2) == 3, '1 + 2 should equal 3'
But since the shared_task decorator does a lot of Celery-internal logic, it isn't really a unit test.
So, for me, there were two options:
OPTION 1: Separate internal logic
proj/tasks_logic.py
def internal_add(a, b):
    return a + b
proj/tasks.py
from .tasks_logic import internal_add

@shared_task(bind=True)
def add_task(self, a, b):
    return internal_add(a, b)
This looks very odd, and besides making it less readable, it requires manually extracting and passing attributes that are part of the request, for instance the task_id in case you need it, which makes the logic less pure.
OPTION 2: mocks
mocking away celery internals
tests/__init__.py
# noinspection PyUnresolvedReferences
from celery import shared_task
from mock import patch

def mock_signature(**kwargs):
    return {}

def mocked_shared_task(*decorator_args, **decorator_kwargs):
    def mocked_shared_decorator(func):
        func.signature = func.si = func.s = mock_signature
        return func
    return mocked_shared_decorator

patch('celery.shared_task', mocked_shared_task).start()
which then allows me to mock the request object (again, in case you need things from the request, like the id or the retries counter).
tests/test_tasks.py
from proj import add_task

class MockedRequest:
    def __init__(self, id=None):
        self.id = id or 1

class MockedTask:
    def __init__(self, id=None):
        self.request = MockedRequest(id=id)

def test_add():
    mocked_task = MockedTask(id=3)
    assert add_task(mocked_task, 1, 2) == 3, '1 + 2 should equal 3'
This solution is much more manual, but, it gives me the control I need to actually unit test, without repeating myself, and without losing the celery scope.
I see a lot of CELERY_ALWAYS_EAGER = True in unit test methods offered as a solution, but since version 5.0.5 there have been many changes that make most of the old answers outdated and, for me, a time-consuming dead end. So for everyone searching for a solution, go to the docs and read the well-documented unit test examples for the new version:
https://docs.celeryproject.org/en/stable/userguide/testing.html
And regarding eager mode with unit tests, here is a quote from the current docs:
Eager mode
The eager mode enabled by the task_always_eager setting is by
definition not suitable for unit tests.
When testing with eager mode you are only testing an emulation of what
happens in a worker, and there are many discrepancies between the
emulation and what happens in reality.
Another option is to mock the task if you do not need the side effects of running it.
from unittest import mock

@mock.patch('module.module.task')
def test_name(self, mock_task): ...
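A fuller sketch of that approach, where the code that schedules the task is exercised but the task body is not (signup and send_welcome_email are hypothetical names):
from unittest import mock
from myproject.views import signup  # hypothetical function under test

@mock.patch('myproject.tasks.send_welcome_email')  # hypothetical task path
def test_signup_schedules_email(mock_send_welcome_email):
    signup(email="user@example.com")
    mock_send_welcome_email.delay.assert_called_once_with(email="user@example.com")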
