Pytest-django dependency injection - python

How does pytest-django know whether to inject a test function with a RequestFactory or Client instance?
def test_with_client(client):
    response = client.get('/')
    assert response.content == b'Foobar'

def test_details(rf):
    request = rf.get('/customer/details')
    response = my_view(request)
    assert response.status_code == 200
In other words: how can you make sure the input fixture is of a certain type?

pytest doesn't inject based on type but on name. The name of the input parameter is matched to registered fixtures.
See the pytest fixture docs, but in short:
import pytest

@pytest.fixture
def connection():
    return Connection()

def test_my_object(connection):
    target = MyObject(connection)
    assert ...
You can add type annotations to help PyCharm and other tools infer the correct type, but pytest itself ignores them.

Short answer: you shouldn't be running these checks for each test. Using test argument names to determine which fixtures are injected is a core component of pytest, and littering each test which uses fixtures with assert isinstance(my_fixture, MyFixtureType) for each fixture is redundant.
pytest-django is already testing that the client and rf fixtures are of the correct type:
def test_client(client):
    assert isinstance(client, Client)
    ...

def test_rf(rf):
    assert isinstance(rf, RequestFactory)

Edit:
Since the fixture is passed as a parameter to the test function, you don't have to check anything as long as you use the correct fixture name.
Here's an example:
@pytest.fixture(scope='session')
def factory():
    return RequestFactory(HTTP_X_REQUESTED_WITH='XMLHttpRequest')

@pytest.fixture(scope='session')
def client():
    return Client(HTTP_X_REQUESTED_WITH='XMLHttpRequest')
Now, in your test method you can take one or both of the fixtures and work with them e.g.:
def test_foo(client):
    ...  # Do stuff

def test_bar(factory):
    ...  # Do stuff
Original answer:
You can check for the type of the input fixture using isinstance.
from django.test import RequestFactory, Client
Inside the test method test for Client:
if isinstance(client, Client):
    ...  # This is a Client instance
Similarly for RequestFactory:
if isinstance(rf, RequestFactory):
    ...  # This is a RequestFactory instance

Related

How to add dependency overriding in FastAPI testing

I'm new to FastAPI, I have implemented everything but when it comes to testing the API I can't override a dependency.
Here is my code:
test_controller.py
import pytest
from starlette.testclient import TestClient
from app.main import app
from app.core.manager_imp import ManagerImp

@pytest.fixture()
def client():
    with TestClient(app) as test_client:
        yield test_client

async def over_create_record():
    return {"msg": "inserted successfully"}

app.dependency_overrides[ManagerImp.create_record] = over_create_record

def test_post(client):
    data = {"name": "John", "email": "john@abc.com"}
    response = client.post("/person/", json=data)
    assert response.status_code == 200
    assert response.json() == {"msg": "inserted successfully"}
controller.py
from app.controllers.v1.controller import Controller
from fastapi import status, HTTPException
from app.models.taxslip import Person
from app.core.manager_imp import ManagerImp
from app.core.duplicate_exception import DuplicateException
from fastapi_utils.cbv import cbv
from fastapi_utils.inferring_router import InferringRouter

router = InferringRouter(tags=["Person"])

@cbv(router)
class ControllerImp(Controller):
    manager = ManagerImp()

    @router.post("/person/")
    async def create_record(self, person: Person):
        """
        Person: A person object
        returns response if the person was inserted into the database
        """
        try:
            response = await self.manager.create_record(person.dict())
            return response
        except DuplicateException as e:
            return e
manager_imp.py
from fastapi import HTTPException, status
from app.database.database_imp import DatabaseImp
from app.core.manager import Manager
from app.core.duplicate_exception import DuplicateException

class ManagerImp(Manager):
    database = DatabaseImp()

    async def create_record(self, taxslip: dict):
        try:
            response = await self.database.add(taxslip)
            return response
        except DuplicateException:
            raise HTTPException(409, "Duplicate data")
In testing I want to override create_record function from ManagerImp class so that I could get this response {"msg": "inserted successfully"}. Basically, I want to mock ManagerImp create_record function. I have tried as you can see in test_controller.py but I still get the original response.
You're not using the dependency injection system to get the ManagerImp.create_record function, so there is nothing to override.
Since you're not using FastAPI's Depends to get your dependency - FastAPI has no way of returning the alternative function.
In your case you'll need to use a regular mocking library instead, such as unittest.mock or pytest-mock.
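For instance, unittest.mock can replace the async method directly. A stdlib-only sketch: here Manager stands in for ManagerImp, and its body is a placeholder for the real database call.

```python
import asyncio
from unittest.mock import AsyncMock, patch

class Manager:
    async def create_record(self, data: dict) -> dict:
        raise RuntimeError("real database call")  # stand-in for DatabaseImp.add

async def run():
    # Replace the class attribute with an AsyncMock for the duration
    # of the `with` block; awaiting it yields return_value.
    with patch.object(Manager, "create_record",
                      AsyncMock(return_value={"msg": "inserted successfully"})):
        return await Manager().create_record({"name": "John"})

result = asyncio.run(run())
print(result)  # {'msg': 'inserted successfully'}
```

pytest-mock's `mocker.patch.object` offers the same call with automatic undo at test teardown.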
I'd also like to point out that initializing a shared dependency as in you've done here by default will share the same instance across all instances of ControllerImp instead of being re-created for each instance of ControllerImp.
The cbv decorator changes things a bit, and as mentioned in the documentation:
For each shared dependency, add a class attribute with a value of type Depends
So to get this to match the FastAPI way of doing things and make the cbv decorator work as you want to:
def get_manager():
    return ManagerImp()

@cbv(router)
class ControllerImp(Controller):
    manager = Depends(get_manager)
And when you do it this way, you can use dependency_overrides as you planned:
app.dependency_overrides[get_manager] = lambda: MyFakeManager()
If you only want to replace the create_record function, you'll still have to use regular mocking.
You'll also have to remove the dependency override after the test has finished, unless you want it to apply to all tests: use yield inside your fixture and remove the override after the yield, when the fixture resumes for teardown.
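Stripped of the FastAPI specifics, that pattern is just a generator: everything before the yield is setup, everything after it is teardown. FakeApp and the get_* names below are stand-ins for the question's app and dependencies; in real code the function would carry @pytest.fixture() and yield a TestClient.

```python
class FakeApp:
    def __init__(self):
        self.dependency_overrides = {}

def get_manager(): ...       # real dependency (stand-in)
def get_fake_manager(): ...  # test replacement (stand-in)

app = FakeApp()

def client():  # would be decorated with @pytest.fixture()
    app.dependency_overrides[get_manager] = get_fake_manager
    yield "test-client"      # real code yields TestClient(app)
    del app.dependency_overrides[get_manager]  # runs after the test

# pytest drives the generator like this:
gen = client()
next(gen)                                      # setup: override installed
assert get_manager in app.dependency_overrides
next(gen, None)                                # teardown: override removed
assert get_manager not in app.dependency_overrides
```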
I think you should put your app.dependency_overrides inside the function decorated with @pytest.fixture. Try to put it inside your client():
@pytest.fixture()
def client():
    app.dependency_overrides[ManagerImp.create_record] = over_create_record
    with TestClient(app) as test_client:
        yield test_client
because every test runs against a fresh app: everything is reset from one test to the next, and only the overrides bound inside the pytest fixture will affect the test.

Mock or avoid cognito authentication and group permissions for pytest

I have a flask app and I'm trying to implement pytest for the services I've built. All of the routes require cognito authentication or cognito group permissions. Is there a way I can mock or avoid cognito? From all the articles I've read online, nothing has helped me so far.
How would a pytest be implemented for the example below?
@app.route('/hello')
@cognito_auth_required
@cognito_group_permissions(["test"])
def hello():
    return 'Hello, World'
I found this question while struggling to do the same thing. Trying to patch out the decorators was proving fruitless until I did some more digging. It turns out that since the decorator syntactic sugar is applied at import time, you need to patch the library function with a bypass function and then reload the module under test for it to work.
I am using unittest rather than pytest. But this example, which returns a dict containing the logged in user's sub (we'll mock this too) should work:
src/controllers/some_module.py:
from flask_cognito import cognito_auth_required, current_cognito_jwt

@cognito_auth_required
def my_example_function() -> dict:
    return {'your_sub': current_cognito_jwt['sub']}
src/test/some_test.py
import unittest
from functools import wraps
from importlib import reload
from unittest.mock import patch

from src.controllers import some_module

def mock_cognito_auth_required(fn):
    """
    Dummy wrapper for @cognito_auth_required: passes through to the
    wrapped function without performing any cognito logic.
    """
    @wraps(fn)
    def decorator(*args, **kwargs):
        return fn(*args, **kwargs)
    return decorator

def setUpModule():
    """
    Patches out the decorator (in the library) and reloads the module
    under test.
    """
    patch('flask_cognito.cognito_auth_required', mock_cognito_auth_required).start()
    reload(some_module)

@patch('src.controllers.some_module.current_cognito_jwt')
class TestSomeModule(unittest.TestCase):
    def test_my_example_function(self, mock_current_cognito_jwt):
        mock_current_cognito_jwt.__getitem__.return_value = 'test'
        response = some_module.my_example_function()
        self.assertEqual(response, {'your_sub': 'test'})
You should be able to do similar for cognito_group_permissions.
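One wrinkle: cognito_group_permissions takes arguments, so its stand-in must be a decorator *factory* (a function returning a pass-through decorator) rather than a plain decorator. A self-contained sketch; the mock_ name is made up and only the stdlib is used:

```python
from functools import wraps

def mock_cognito_group_permissions(groups):
    """Dummy replacement for @cognito_group_permissions(["..."]).

    Accepts (and ignores) the groups argument, then returns a
    pass-through decorator, mirroring the real library's shape.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@mock_cognito_group_permissions(["test"])
def hello():
    return 'Hello, World'

print(hello())  # Hello, World
```

Patch it in the same way as mock_cognito_auth_required, then reload the module under test.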

What if my test cases need data from database, not an empty database?

I recently started writing test code and I got curious about setting up a database for testing.
Say my web server has a feature that computes over all the data from users, and I want to verify the output.
In this case, I would need lots of the original (remaining) data in my database.
But in another case, I need to check whether my web server redirects the user to the right page depending on the existence of specific data.
This is my test fixture code; as you can see, the db fixture's teardown drops all data.
@pytest.fixture(scope='session', autouse=True)
def app(request):
    settings_override = {
        'DEBUG': False,
        'TESTING': True,
        'SQLALCHEMY_DATABASE_URI': f'{DB_CONFIG["VENDOR"]}://{DB_CONFIG["USER_NAME"]}:{DB_CONFIG["PWD"]}@{DB_CONFIG["HOST"]}/{DB_CONFIG["SCHEMA"]}'
    }
    app = _app
    app.config.update(settings_override)
    ctx = app.app_context()
    ctx.push()

    def teardown():
        ctx.pop()

    request.addfinalizer(teardown)
    return app

@pytest.fixture(scope='session', autouse=True)
def db(app, request):
    _db.app = app
    _db.create_all()

    def teardown():
        _db.drop_all()

    request.addfinalizer(teardown)
    return _db

@pytest.fixture(scope='session', autouse=True)
def client(app):
    _client = app.test_client()

    def get_user_config():
        conf = {
            'user_imin': USER_CONFIG['user_imin'],
            'user_token': encrypt_imin_random(USER_CONFIG['user_imin']),
        }
        return conf

    _client.application.config.update(get_user_config())
    return _client

@pytest.fixture(scope='function', autouse=True)
def session(db, request):
    conn = db.engine.connect()
    transaction = conn.begin()
    options = dict(bind=conn, binds={})
    session = db.create_scoped_session(options=options)
    db.session = session

    def teardown():
        transaction.rollback()
        conn.close()
        session.remove()

    request.addfinalizer(teardown)
    return session
My data will still be there if I comment out that line, but then I can't test the cases that need an empty dataset.
I thought about setting up more fixtures to choose from for each situation.
It would be great if each test case could decide whether it runs as if the dataset were empty or not.
If there is no such option, what is the best way to set up a test suite in my case?
You can start each test with an empty database. This way your tests are all independent of each other. For the tests which need data, set up the tests by populating the database with data. You probably want to reuse code to do this. You can dump some of the data from a production database to store with your tests to use as a standard test set.
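A stdlib-only sketch of that idea using an in-memory SQLite database (the table and helper names are made up): tests that need an empty dataset use the bare setup, and tests that need data call a shared population helper.

```python
import sqlite3

def make_empty_db():
    """Setup shared by all tests: schema only, no rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def populate_standard_users(conn):
    """Reusable helper: a standard data set, e.g. dumped from production."""
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    conn.commit()

# A test that needs an empty dataset:
conn = make_empty_db()
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0

# A test that needs data reuses the helper on its own fresh database:
conn = make_empty_db()
populate_standard_users(conn)
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 2
```

In pytest terms, make_empty_db becomes your session/function fixture and populate_standard_users becomes a second fixture that tests opt into by naming it as a parameter.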

Using #pytest.fixture(scope="module") with #pytest.mark.asyncio

I think the example below is a really common use case:
create a connection to a database once,
pass this connection around to test which insert data
pass the connection to a test which verifies the data.
Changing the scope to @pytest.fixture(scope="module") causes ScopeMismatch: You tried to access the 'function' scoped fixture 'event_loop' with a 'module' scoped request object, involved factories.
Also, the test_insert and test_find coroutines do not need the event_loop argument, because the loop is already accessible through the connection.
Any ideas how to fix those two issues?
import pytest

@pytest.fixture(scope="function")  # <-- want this to be scope="module"; run once!
@pytest.mark.asyncio
async def connection(event_loop):
    """Expensive function; want to do in the module scope. Only this function needs `event_loop`!"""
    conn = await make_connection(event_loop)
    return conn
@pytest.mark.dependency()
@pytest.mark.asyncio
async def test_insert(connection, event_loop):  # <-- does not need event_loop arg
    """Test insert into database.

    NB does not need event_loop argument; just the connection.
    """
    _id = 0
    success = await connection.insert(_id, "data")
    assert success == True

@pytest.mark.dependency(depends=['test_insert'])
@pytest.mark.asyncio
async def test_find(connection, event_loop):  # <-- does not need event_loop arg
    """Test database find.

    NB does not need event_loop argument; just the connection.
    """
    _id = 0
    data = await connection.find(_id)
    assert data == "data"
The solution is to redefine the event_loop fixture with the module scope. Include that in the test file.
#pytest.fixture(scope="module")
def event_loop():
loop = asyncio.get_event_loop()
yield loop
loop.close()
A similar ScopeMismatch issue was raised on GitHub for pytest-asyncio (link). The solution below works for me:
@pytest.yield_fixture(scope='class')
def event_loop(request):
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()

Testing aiohttp client with unittest.mock.patch

I've written a simple HTTP client using aiohttp and I'm trying to test it by patching aiohttp.ClientSession and aiohttp.ClientResponse. However, it appears as though the unittest.mock.patch decorator is not respecting my asynchronous code. At a guess, I would say it's some kind of namespacing mismatch.
Here's a minimal example:
from aiohttp import ClientSession

async def is_ok(url: str) -> bool:
    async with ClientSession() as session:
        async with session.request("GET", url) as response:
            return (response.status == 200)
I'm using an asynchronous decorator for testing, as described in this answer. So here's my attempted test:
import unittest
from unittest.mock import MagicMock, patch

from aiohttp import ClientResponse
from my.original.module import is_ok

class TestClient(unittest.TestCase):
    @async_test
    @patch("my.original.module.ClientSession", spec=True)
    async def test_client(self, mock_client):
        mock_response = MagicMock(spec=ClientResponse)
        mock_response.status = 200

        async def _mock_request(*args, **kwargs):
            return mock_response

        mock_client.request = mock_response
        status = await is_ok("foo")
        self.assertTrue(status)
My is_ok coroutine works fine when it's used in, say, __main__, but when I run the test, it gives me an error that indicates that the session.request function has not been mocked per my patch call. (Specifically it says "Could not parse hostname from URL 'foo'", which it should if it weren't mocked.)
I am unable to escape this behaviour. I have tried:
Importing is_ok after the mocking is done.
Various combinations of assigning mocks to mock_client and mock_client.__aenter__, setting mock_client.request to MagicMock(return_value=mock_response), or using mock_client().request, etc.
Writing a mock ClientSession with specific __aenter__ and __aexit__ methods and using it in the new argument to patch.
None of these appear to make a difference. If I put assertions into is_ok to check that ClientSession is an instance of MagicMock, those assertions fail when I run the test (as they would if the code were not patched). That leads me to my namespace-mismatch theory: the event loop is running in a different namespace from the one patch is targeting.
Either that, or I'm doing something stupid!
Mocking ClientSession is discouraged.
The recommended way is to create a fake server and send real requests to it.
Take a look at the aiohttp testing examples.
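With aiohttp's bundled test utilities you can spin up a real server in-process and point a real client at it; no mocking of ClientSession is needed. A minimal sketch (the handler and route are made up; pytest-aiohttp's aiohttp_client fixture wraps this same machinery):

```python
import asyncio
from aiohttp import web
from aiohttp.test_utils import TestClient, TestServer

async def handler(request):
    # Stand-in for whatever endpoint is_ok() would hit
    return web.Response(text="Foobar")

async def main():
    app = web.Application()
    app.router.add_get("/", handler)
    # TestServer binds to an ephemeral local port; TestClient sends
    # real HTTP requests to it.
    async with TestClient(TestServer(app)) as client:
        resp = await client.get("/")
        return resp.status

print(asyncio.run(main()))  # 200
```

Inside such a test, is_ok(str(client.make_url("/"))) would exercise the real ClientSession code path against the fake server.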
