I am trying to build a pytest plugin that utilizes the pytest_runtest_logreport in order to invoke some code everytime a test fails. I'd like to gate this plugin with a CLI argument I have added using the pytest_addoption hook. Unfortunately I can't seem to figure out how to access the pytest session state or arguments inside the pytest_runtest_logreport hook. Is there a way to do this? I don't see it in the hookspec.
You can't get the session from the standard TestReport object. However, you can add a hook wrapper around pytest_runtest_makereport (the hook that creates the report object) and attach the session to the report yourself. Example:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    out = yield
    report = out.get_result()
    # attach the session to the report so later hooks can reach it
    report.session = item.session

def pytest_runtest_logreport(report):
    print(report.session)
Another way of passing state between hooks is a plugin class. Example of accessing the config object in pytest_runtest_logreport:
import pytest

@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    # register the plugin instance so its hook methods get called
    p = MyPlugin(config)
    config.pluginmanager.register(p, 'my_plugin')

class MyPlugin:
    def __init__(self, config):
        self.config = config

    def pytest_runtest_logreport(self, report):
        print(report, self.config)
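Putting the two pieces together for the original question, here is a minimal sketch of gating the failure handler behind a CLI flag; the option name --fail-hook and the behaviour inside the hook are illustrative assumptions, not taken from the question:

# conftest.py (sketch)
import pytest

def pytest_addoption(parser):
    parser.addoption(
        '--fail-hook', action='store_true', default=False,
        help='run extra code whenever a test fails')

def pytest_configure(config):
    # register the plugin only when the CLI flag was given
    if config.getoption('--fail-hook'):
        config.pluginmanager.register(FailureHookPlugin(config), 'failure_hook_plugin')

class FailureHookPlugin:
    def __init__(self, config):
        self.config = config

    def pytest_runtest_logreport(self, report):
        # react only to failures from the call phase, not setup/teardown errors
        if report.when == 'call' and report.failed:
            print(f'failure hook triggered for {report.nodeid}')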
I have a database handler that utilizes the SQLAlchemy ORM to communicate with a database. As part of SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function does inside that context manager?
EDIT: I realized the file structure mattered because of the complexity it introduced. I re-structured the code below to more closely mirror what the end file structure will be like, and what a common production repo in my environment would look like, with code defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable

def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        pass

def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens on 'MockSession' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock, as it's easier to control the methods/data flow with a custom mock class.
Since the SQLAlchemy session context manager requires a begin() to start the context, my mock class needed a begin() method. Returning self from begin() allows us to inspect the values later.
Context managers rely on the magic methods __enter__ and __exit__ with the argument signatures you see above.
The mock class contains mocked methods that record state in instance variables, which we can assert on later.
This relies on monkeypatch (there are other ways, I'm sure), but what's important to note is that when you pass your mock class, you want to patch in an instance of the class and not the class itself. The parentheses make a world of difference.
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
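For comparison, a rough MagicMock-based equivalent of the same test (an untested sketch; it assumes the same Handler and delete_stuff shown above, and MagicMock's built-in support for the context manager protocol):

from unittest.mock import MagicMock

import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff_with_magicmock(monkeypatch):
    handler = Handler()
    mock_session = MagicMock()
    # make `with handler.Session.begin() as session:` hand back the mock itself
    mock_session.begin.return_value.__enter__.return_value = mock_session
    monkeypatch.setattr(handler, 'Session', mock_session)

    dlt.delete_stuff(handler)

    mock_session.execute.assert_called_once()
    assert str(mock_session.execute.call_args[0][0]) == 'DELETE FROM some_table'
    mock_session.commit.assert_called_once()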
I'm working on unit tests for a service I made that uses confluent-kafka. The goal is to test successful function calls, exception handling, etc. The problem I'm running into is that, because I instantiate the client in the constructor of my service, the tests are failing and I'm unsure how to patch a constructor. My question is: how do I mock my service in order to properly test its functionality?
Example_Service.py:
from confluent_kafka.schema_registry import SchemaRegistryClient

class ExampleService:
    def __init__(self, config):
        self.service = SchemaRegistryClient(config)

    def get_schema(self):
        return self.service.get_schema()
Example_Service_tests.py:
from unittest import mock

from confluent_kafka.schema_registry import SchemaRegistryClient
from Example_Service import ExampleService

@mock.patch.object(SchemaRegistryClient, "get_schema")
def test_get_schema_success(mock_client):
    schema_Id = ExampleService.get_schema()
    mock_client.assert_called()
The problem is that you aren't creating an instance of ExampleService; __init__ never gets called.
You can avoid patching anything by allowing your class to accept a client factory as an argument (which can default to SchemaRegistryClient):
class ExampleService:
    def __init__(self, config, *, client_factory=SchemaRegistryClient):
        self.service = client_factory(config)
    ...
Then in your test, you can simply pass an appropriate stub as an argument:
from unittest.mock import Mock

def test_get_schema_success():
    mock_client = Mock()
    service = ExampleService(some_config, client_factory=mock_client)
    mock_client.assert_called()
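As a possible extension (a sketch, not part of the original answer), you can also exercise get_schema through the injected stub, since the factory's return value stands in for the real client:

def test_get_schema_passthrough():
    mock_client = Mock()
    service = ExampleService(some_config, client_factory=mock_client)
    service.get_schema()
    # the factory's return value plays the role of the SchemaRegistryClient instance
    mock_client.return_value.get_schema.assert_called_once()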
Two ways:
mock the entire class using @mock.patch("Example_Service.SchemaRegistryClient"), OR
replace @mock.patch.object(SchemaRegistryClient, "get_schema") with both
@mock.patch.object(SchemaRegistryClient, "__init__")
@mock.patch.object(SchemaRegistryClient, "get_schema")
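A minimal sketch of the second approach, assuming the service module is importable as Example_Service; the config dict is a placeholder, and patching __init__ to return None prevents the real client from being constructed:

from unittest import mock

from confluent_kafka.schema_registry import SchemaRegistryClient
from Example_Service import ExampleService

@mock.patch.object(SchemaRegistryClient, 'get_schema')
@mock.patch.object(SchemaRegistryClient, '__init__', return_value=None)
def test_get_schema_success(mock_init, mock_get_schema):
    # decorators apply bottom-up, so mock_init comes first in the argument list
    service = ExampleService({'url': 'http://localhost'})  # placeholder config
    service.get_schema()
    mock_get_schema.assert_called_once()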
I have the following simplified code example:
from urllib.parse import urlparse, parse_qs
from selenium import webdriver
import pytest

@pytest.fixture(scope="module")
def driver():
    options = webdriver.ChromeOptions()
    _driver = webdriver.Chrome(options=options)
    yield _driver
    _driver.close()

@pytest.mark.parametrize("instance_name", ["instance1", "instance2"])
class TestInstance:
    @pytest.fixture
    def authorization_code(self, instance_name, driver):
        driver.get(f"https://{instance_name}.com")
        ...  # do some UI actions here
        authorization_code = parse_qs(urlparse(redirected_url).query)["code"][0]
        return authorization_code

    @pytest.fixture
    def access_token(self, authorization_code):
        ...  # obtain access_token here using authorization_code
        return "access_token"

    def test_case_1(self, access_token):
        ...  # do some API calls using access_token

    def test_case_2(self, access_token):
        ...  # do some API calls using access_token
What I would like to do is execute the UI actions in the authorization_code fixture once and obtain one access_token per instance.
Currently the UI actions are executed for every test case, so they actually run 2 * 2 = 4 times.
Is it possible to do with pytest?
Or maybe I am missing something in my setup?
In general I would just change the fixture scope: currently the fixture gets recreated every time it is requested, hence the repeated UI actions. This is by design, to ensure fixtures are clean. If your fixture didn't depend on the function-scoped instance_name parameter, you could just use scope="class" (see the docs on fixture scopes).
In this case I'd be tempted to handle the caching myself:
import pytest
from datetime import datetime

@pytest.mark.parametrize("instance", ("instance1", "instance2"))
class TestClass:
    @pytest.fixture()
    def authcode(self, instance, authcodes={}, which=[0]):
        # the mutable default arguments persist between calls and act as a cache
        if instance not in authcodes:
            authcodes[instance] = f"authcode {which[0]} for {instance} at {datetime.now()}"
            which[0] += 1
        return authcodes[instance]

    def test1(self, authcode):
        print(authcode)

    def test2(self, authcode):
        print(authcode)
(which is just used to prove that we don't regenerate the fixture).
This feels inelegant and I'm open to better ways of doing it.
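One arguably cleaner variant (a sketch under the same simplified setup) is to parametrize a class-scoped fixture instead of the test class, so anything derived from it is computed only once per instance value:

import pytest
from datetime import datetime

# parametrizing the fixture (rather than the class) lets dependent class-scoped
# fixtures be created once per parameter value
@pytest.fixture(scope='class', params=['instance1', 'instance2'])
def instance(request):
    return request.param

@pytest.fixture(scope='class')
def authcode(instance):
    return f'authcode for {instance} at {datetime.now()}'

class TestClass:
    def test1(self, authcode):
        print(authcode)

    def test2(self, authcode):
        print(authcode)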
I'm currently using pytest_addoption to run my API tests, so the tests run against whatever environment the user chooses on the command line. In my test file, I'm trying to instantiate the UsersSupport class just once, passing the env argument. My code:
conftest.py
import pytest

# Environments
QA1 = 'https://qa1.company.com'
LOCALHOST = 'https://localhost'

def pytest_addoption(parser):
    parser.addoption(
        '--env',
        action='store',
        default='qa1'
    )

@pytest.fixture(scope='class')
def env(request):
    cmd_option = request.config.getoption('env')
    if cmd_option == 'qa1':
        chosen_env = QA1
    elif cmd_option == 'localhost':
        chosen_env = LOCALHOST
    else:
        raise UnboundLocalError('"--env" command line option must be "qa1" or "localhost"')
    return chosen_env
users_support.py
import requests

class UsersSupport:
    def __init__(self, env):
        self.env = env
        self.users_endpoint = '/api/v1/users'

    def create_user(self, payload):
        response = requests.post(
            url=f'{self.env}{self.users_endpoint}',
            json=payload,
        )
        return response
post_create_user_test.py
import pytest
from faker import Faker
from projects import UsersSupport
from projects import users_payload

class TestCreateUser:
    @pytest.fixture(autouse=True, scope='class')
    def setup_class(self, env):
        self.users_support = UsersSupport(env)
        self.fake = Faker()
        self.create_user_payload = users_payload.create_user_payload

    def test_create_user(self):
        created_user_res = self.users_support.create_user(
            payload=self.create_user_payload
        ).json()
        print(created_user_res)
The issue
When I run pytest projects/tests/post_create_user_test.py --env qa1, I get an AttributeError: 'TestCreateUser' object has no attribute 'users_support' error. But if I remove the scope from the setup_class fixture, it runs before every test method instead of once for all of them.
How can I use the env fixture in the setup_class and instantiate the UsersSupport class to use in all methods?
If you use a fixture with class scope, the self parameter does not refer to the same instance the tests run on. You can, however, still access the class itself via self.__class__, so you can turn your instance variables into class variables.
Your code could look like this:
import pytest
from faker import Faker
from projects import UsersSupport
from projects import users_payload

class TestCreateUser:
    @pytest.fixture(autouse=True, scope='class')
    def setup_class(self, env):
        self.__class__.users_support = UsersSupport(env)
        self.__class__.fake = Faker()
        self.__class__.create_user_payload = users_payload.create_user_payload

    def test_create_user(self):
        created_user_res = self.users_support.create_user(
            payload=self.create_user_payload
        ).json()  # now you access the class variable
        print(created_user_res)
During a test run, a new instance of the test class is created for each test.
If you use a default function-scoped fixture, it is called within the same instance as the test, so the self arguments of the fixture and the current test refer to the same instance.
In the case of a class-scoped fixture, the setup code runs in a separate instance before the test instances are created. That instance has to live until the end of all tests so it can execute teardown code, so it is different from all test instances. As it is still an instance of the same test class, you can store your variables on the test class in this case.
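A small illustration of that point (a sketch; run with pytest -s to see the ids): a class-scoped fixture sees a different self than the tests, while a function-scoped fixture shares the test's instance:

import pytest

class TestInstanceIdentity:
    @pytest.fixture(autouse=True, scope='class')
    def class_scoped(self):
        # executed on a separate, long-lived instance of the class
        print('class-scoped fixture self:', id(self))

    @pytest.fixture(autouse=True)
    def function_scoped(self):
        # executed on the same instance the test method runs on
        print('function-scoped fixture self:', id(self))

    def test_identity(self):
        print('test method self:', id(self))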
What I would like to achieve is mocking functions in various modules automatically with pytest, so I defined this in my conftest.py:
import sys
import __builtin__
from itertools import chain

import pytest
from mock import Mock

# Fixture factory magic START
NORMAL_MOCKS = [
    "logger", "error", "logging", "base_error", "partial"]
BUILTIN_MOCKS = ["exit"]

def _mock_factory(name, builtin):
    def _mock(monkeypatch, request):
        # patch either the builtins module or the module under test
        # (looked up via a MODULE attribute on the test module)
        module = __builtin__ if builtin else request.node.module.MODULE
        ret = Mock()
        monkeypatch.setattr(module, name, ret)
        return ret
    return _mock

iterable = chain(
    ((el, False) for el in NORMAL_MOCKS),
    ((el, True) for el in BUILTIN_MOCKS))

for name, builtin in iterable:
    fname = "mock_{name}".format(name=name)
    _tmp_fn = pytest.fixture(name=fname)(_mock_factory(name, builtin))
    _tmp_fn.__name__ = fname
    setattr(
        sys.modules[__name__],
        "mock_{name}".format(name=name), _tmp_fn)
# Fixture factory magic END
This works and all, but I would like to omit the NORMAL_MOCKS and BUILTIN_MOCKS lists. Basically, in a pytest hook I should be able to see that there is, say, a mock_foo fixture that is not registered yet, create a mock for it with the factory, and register it. I just couldn't figure out how to do this. I was looking into the pytest_runtest_setup function, but could not figure out how to do the actual fixture registration. So basically I would like to know which hook/call I can use to register new fixture functions programmatically.
One of the ways is to parameterize the tests at the collection/generation stage, i.e. before the test execution begins: https://docs.pytest.org/en/latest/example/parametrize.html
# conftest.py
import pytest

def mock_factory(name):
    return name

def pytest_generate_tests(metafunc):
    for name in metafunc.fixturenames:
        if name.startswith('mock_'):
            metafunc.parametrize(name, [mock_factory(name[5:])])

# test_me.py
def test_me(request, mock_it):
    print(mock_it)
A very simple solution. But the downside is that the test is reported as parametrized when it actually is not:
$ pytest -s -v -ra
====== test session starts ======
test_me.py::test_me[it] PASSED
====== 1 passed in 0.01 seconds ======
To fully simulate the function args without the parametrization, you can use a less obvious trick:
# conftest.py
import pytest

def mock_factory(name):
    return name

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    # inject values for any requested mock_* fixture that was not resolved normally
    for name in item.fixturenames:
        if name.startswith('mock_') and name not in item.funcargs:
            item.funcargs[name] = mock_factory(name[5:])
    yield
The pytest_runtest_setup hook is also a good place for this, as far as I've just tried.
Note that you do not register a fixture in that case. It is too late for fixture registration, as all the fixtures are gathered and prepared much earlier, at the collection/parametrization stages. At this stage you can only execute the tests and provide the values. It is your responsibility to calculate the fixture values and to destroy them afterward.
The snippet below is a pragmatic solution to "how to dynamically add fixtures".
Disclaimer: I don't have expertise on pytest. I'm not saying this is what pytest was designed for, I just looked at the source code and came up with this and it seems to work. The fact that I use "private" attributes means it might not work with all versions (currently I'm on pytest 7.1.3)
from _pytest.fixtures import FixtureDef
from _pytest.fixtures import SubRequest
import pytest

# autouse is relevant, as then the fixture registration happens in time. It's too late if the
# fixture is only required without autouse, e.g. via @pytest.mark.usefixtures("add_fixture_dynamically")
@pytest.fixture(autouse=True)
def add_fixture_dynamically(request: SubRequest):
    """
    Conditionally and dynamically adds another fixture. It's conditional on the presence of:
    @pytest.mark.my_mark()
    """
    marker = request.node.get_closest_marker("my_mark")

    # don't register the fixture if the marker is not present
    # (still yield once, because this is a generator fixture)
    if marker is None:
        yield
        return

    def your_fixture():  # the name of the fixture must match the parameter name, like other fixtures
        return "hello"

    # register the fixture just-in-time
    request._fixturemanager._arg2fixturedefs[your_fixture.__name__] = [
        FixtureDef(
            argname=your_fixture.__name__,
            func=your_fixture,
            scope="function",
            fixturemanager=request._fixturemanager,
            baseid=None,
            params=None,
        ),
    ]

    yield  # runs the test. Could be wrapped in try/except/finally
# suppress warning (works if this and `add_fixture_dynamically` are in `conftest.py`)
def pytest_configure(config):
    """Prevents printing of the warning 'PytestUnknownMarkWarning: Unknown pytest.mark.my_mark'"""
    config.addinivalue_line("markers", "my_mark")
@pytest.mark.my_mark()
def test_adding_fixture_dynamically(your_fixture):
    assert your_fixture == "hello"