I'm using pytest-asyncio in a project that I'm currently working on.
In this project I'm implementing the repository pattern, and for tests I wrote a simple in-memory repository (i.e. a dict with primary keys as keys and entities as values). This repository is a class with async methods and it has the following method:
async def update(self, entity: IEntity) -> IEntity:
    try:
        if entity.id not in self._storage:
            raise KeyError()
        self._storage[entity.id] = entity
    except KeyError:
        raise EntityNotFound(
            f"Can't find {self._entity_type} with id: {entity.id}",
            _id=entity.id,
        )
    return entity
And I have the following test:
@pytest.mark.asyncio
async def test_delete_nonexistent_sale(self):
    with pytest.raises(EntityNotFound) as e:
        await self.service.handle({
            'sale_id': '93939393939393', 'salesman_id': self.salesman.id,
        })
    assert 1 == 2
    # Ignore this assert for now, you'll understand soon
where service.handle is another async function that awaits repository.update(pk) on its first line and has no try/except inside.
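For context, a simplified sketch of the service (not the real code, just the shape of it):

class DeleteSaleService:
    def __init__(self, repository):
        self.repository = repository

    async def handle(self, command: dict):
        # The first line awaits the repository; EntityNotFound should simply
        # propagate to the caller, since there is no try/except here.
        return await self.repository.update(command['sale_id'])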
The problem is that this test (which obviously should fail) passes, even with the assert 1 == 2. For some reason I can't even use pdb/ipdb.set_trace() after the repository call.
Pytest shows me this warning:
purchase_system/tests/test_domain/test_services/test_delete_sale.py::TestDeleteSale::test_delete_nonexistent_sale
/home/tamer.cuba/Documents/purchase-system/purchase_system/tests/test_domain/test_services/test_delete_sale.py:102: RuntimeWarning: coroutine 'DeleteSaleService.handle' was never awaited
self.service.handle(
-- Docs: https://docs.pytest.org/en/stable/warnings.html
How can I propagate the exceptions in tests using pytest-asyncio?
I am using a library that requires an async context (aioboto3).
My issue is that I can't call methods on my custom S3StreamingFile instance from outside the async with block. If I do, Python raises an exception telling me that HttpClient is None.
I want to access the class methods of S3StreamingFile from an outer function, for example in an API route. I don't want to return anything more (from file_2.py) than the S3StreamingFile class instance to the caller (file_3.py). The aioboto3-related code can't be moved to file_3.py; file_1.py and file_2.py need to contain the aioboto3-related logic.
How can I solve this?
Example of code that does not work:
# file_1.py
class S3StreamingFile():
    def __init__(self, s3_object):
        self.s3_object = s3_object

    async def size(self):
        return await self.s3_object.content_length  # raises exception, HttpClient is None
    ...

# file_2.py
async def get_file():
    async with s3.resource(...) as resource:
        s3_object = await resource.Object(...)
        s3_file = S3StreamingFile(s3_object)
        return s3_file

# file_3.py
async def main():
    s3_file = await get_file()
    size = await s3_file.size()  # raises exception, HttpClient is None
Example of working code:
# file_1.py
class S3StreamingFile():
    def __init__(self, s3_object):
        self.s3_object = s3_object

    async def size(self):
        return await self.s3_object.content_length
    ...

# file_2.py
async def get_file():
    async with s3.resource(...) as resource:
        s3_object = await resource.Object(...)
        s3_file = S3StreamingFile(s3_object)
        size = await s3_file.size()  # works OK here, HttpClient is available
        return s3_file

# file_3.py
async def main():
    s3_file = await get_file()
I want to access the class methods from an outer function... how do I solve this?
Don't. This library uses async context managers to handle resource acquisition and release. The whole point of the context manager is that things like s3_file.size() only make sense while you hold the relevant resource (here, the S3 file instance).
But how do you use this data in the rest of your program? In general, since you haven't said what the rest of your program is or why you want this data, there are two approaches:
acquire the resource somewhere else, and then make it available in much larger scopes, or
make your other functions resource-aware.
In the first case, you'd acquire the resource before all the logic runs, and then hold on to it. (This might look like RAII.) This can make sense in smaller scripts, or when a resource is designed to be held by only one process at a time. It's a poor fit for code that will spend most of its time doing nothing, or that has to coexist with other users of the resource. (An extension of this is writing your own code as a context manager, effectively moving the problem up the call stack. If each code path only handles one resource, this might well be the way to go.)
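A rough sketch of that context-manager extension, using contextlib.asynccontextmanager and the made-up aioboto3 calls from the question (not a tested implementation):

# file_2.py (hypothetical variant): expose the acquisition as a context manager,
# so the caller decides how long the resource stays open.
from contextlib import asynccontextmanager

@asynccontextmanager
async def open_file():
    async with s3.resource(...) as resource:  # same assumed aioboto3 setup as in the question
        s3_object = await resource.Object(...)
        yield S3StreamingFile(s3_object)  # only valid while the resource is open

# file_3.py
async def main():
    async with open_file() as s3_file:
        size = await s3_file.size()  # HttpClient is still alive here

This just moves the async with up to the caller, which is the point of the approach: whoever needs the data is the one holding the resource.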
In the second, you'd write your higher-level functions to be aware that they're accessing a resource. You might do this by passing the resource itself around:
def get_file(resource: AcquiredResource) -> FileResource:
    ...

def get_size(thing: AcquirableResource) -> int:
    with thing as resource:
        s3_file = get_file(resource)
        return s3_file.size
(using made-up generic types here to illustrate the point).
Or you might want a static copy of (some) attributes of a particular resource, like the file here, plus a step where you build that copy. Personally I would likely store those in a dict or a plain local object, to make it clear that I'm not handling the resource itself.
The basic idea here is that the with block guards access to a potentially difficult-to-acquire resource. That safety is built into the library, but it comes at the cost of having to think about the acquisition and structure it into the flow of your code.
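As a concrete sketch of that snapshot idea, reusing the hypothetical names from the question:

# file_2.py (hypothetical variant): copy what you need while the resource
# is open, and return plain data instead of the live object.
async def get_file_info() -> dict:
    async with s3.resource(...) as resource:
        s3_object = await resource.Object(...)
        s3_file = S3StreamingFile(s3_object)
        return {'size': await s3_file.size()}  # plain dict, safe to use anywhere

# file_3.py
async def main():
    info = await get_file_info()
    print(info['size'])  # no live HttpClient needed here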
I am having trouble testing the output of a custom exception in pytest.
import pytest

class CustomException(Exception):
    def __init__(self, extra_message: str):
        self.message = extra_message
        super().__init__(self.message)
        # Should not need to print this as Exception already does this
        # print(self.message)

def test_should_get_capsys_output(capsys):
    with pytest.raises(CustomException):
        raise CustomException("This should be here.")

    out, err = capsys.readouterr()
    # This should not be true
    assert out == ''
    assert err == ''
    assert 'This' not in out
This example should not pass, since I should be able to assert that something came out in the output. If I add print(self.message), the message ends up printed twice when the exception is actually used, and only then does capsys collect stdout.
I've also tried variations of caplog and capfd to no avail. [This SO solution] recommends using with pytest.raises(...) as info and testing info, but I would have expected capsys to work as well.
Thank you for your time.
I'm sort of confused by what you're asking.
A Python exception stops execution of the current test, even within pytest, and its traceback is only written to stderr by the interpreter if it goes unhandled. By raising the exception inside the pytest.raises context manager (the with statement), we catch it before it gets a chance to stop the current test and before anything reaches stderr, which is why capsys captures nothing.
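If the goal is to assert on the exception text, the usual approach is to inspect the captured exception object rather than stdout; a short sketch using the CustomException class from the question:

import pytest

def test_should_see_exception_message():
    with pytest.raises(CustomException) as exc_info:
        raise CustomException("This should be here.")

    # Nothing is written to stdout/stderr, but the message is on the exception object.
    assert str(exc_info.value) == "This should be here."
    assert exc_info.value.message == "This should be here."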
I usually declare a base exception for my modules which does nothing; from that one I derive custom errors that can carry additional custom data. AFAIK this is the Right Way™ to use exceptions in Python.
I'm also used to building a human-readable message from that custom info and passing it along, so I can refer to that message in error handlers. This is an example:
# this code is meant to be compatible with Python-2.7.x
class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''

class TooManyFoo(MycoolmoduleException):
    '''got too many Foo things'''
    def __init__(self, foo_num):
        self.foo_num = foo_num
        msg = "someone passed me %d Foos" % foo_num
        super(TooManyFoo, self).__init__(msg)

# .... somewhere else ....
try:
    do_something()
except Exception as exc:
    tell_user(exc.message)

# real world example using Click
@click.command()
@click.pass_context
def foo(ctx):
    '''do something'''
    try:
        pass  # ... try really hard to do something useful ...
    except MycoolmoduleException as exc:
        click.echo(exc.message, err=True)
        ctx.exit(-1)
Now, when I run that code through pylint-2.3.1 it complains about my use of MycoolmoduleException.message:
coolmodule.py:458:19: E1101: Instance of 'MycoolmoduleException' has no 'message' member (no-member)
That kind of code has always worked for me (both in Python 2 and Python 3), and hasattr(exc, 'message') in the same code returns True, so why is pylint complaining? And/or: how could that code be improved?
(NB: the same happens if I try to catch the built-in Exception instead of my own MycoolmoduleException)
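For illustration, one possible adjustment (untested against that exact pylint version) is to set the attribute explicitly on the base class, so it exists on every subclass regardless of Python version:

class MycoolmoduleException(Exception):
    '''base Mycoolmodule Exception'''
    def __init__(self, message=''):
        self.message = message  # explicit attribute, visible to static analysis
        super(MycoolmoduleException, self).__init__(message)

class TooManyFoo(MycoolmoduleException):
    '''got too many Foo things'''
    def __init__(self, foo_num):
        self.foo_num = foo_num
        super(TooManyFoo, self).__init__("someone passed me %d Foos" % foo_num)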
I am testing an async function that might get deadlocked. I tried to add a fixture to limit the function to only run for 5 seconds before raising a failure, but it hasn't worked so far.
Setup:
pipenv --python==3.6
pipenv install pytest==4.4.1
pipenv install pytest-asyncio==0.10.0
Code:
import asyncio
import pytest

@pytest.fixture
def my_fixture():
    # attempt to start a timer that will stop the test somehow
    asyncio.ensure_future(time_limit())
    yield 'eggs'

async def time_limit():
    await asyncio.sleep(5)
    print('time limit reached')  # this isn't printed
    raise AssertionError

@pytest.mark.asyncio
async def test(my_fixture):
    assert my_fixture == 'eggs'
    await asyncio.sleep(10)
    print('this should not print')  # this is printed
    assert 0
--
Edit: Mikhail's solution works fine. I can't find a way to incorporate it into a fixture, though.
A convenient way to limit a function (or a block of code) with a timeout is to use the async-timeout module. You can use it inside your test function or, for example, create a decorator. Unlike a fixture, it lets you specify a concrete timeout for each test:
import asyncio
import pytest
from async_timeout import timeout

def with_timeout(t):
    def wrapper(corofunc):
        async def run(*args, **kwargs):
            async with timeout(t):
                return await corofunc(*args, **kwargs)
        return run
    return wrapper

@pytest.mark.asyncio
@with_timeout(2)
async def test_sleep_1():
    await asyncio.sleep(1)
    assert 1 == 1

@pytest.mark.asyncio
@with_timeout(2)
async def test_sleep_3():
    await asyncio.sleep(3)
    assert 1 == 1
It's not hard to create a decorator for a concrete time (with_timeout_5 = partial(with_timeout, 5)).
I don't know how to create a fixture for this (if you really need a fixture), but the code above can serve as a starting point. I'm also not sure whether there's a better common way to achieve the goal.
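For illustration, the partial variant mentioned above could look like this (building on the with_timeout decorator defined earlier; note the extra call, since with_timeout is a decorator factory):

from functools import partial

with_timeout_5 = partial(with_timeout, 5)

@pytest.mark.asyncio
@with_timeout_5()
async def test_sleep_4():
    await asyncio.sleep(4)  # passes: finishes within the 5 second limit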
There is a way to use fixtures for timeouts; one just needs to add the following hook to conftest.py.
Any fixture whose name is prefixed with timeout must return the number of seconds (int, float) the test is allowed to run.
The closest fixture with respect to scope is chosen. autouse fixtures have lower priority than explicitly requested ones; the latter are preferred. Unfortunately, order in the function argument list does NOT matter.
If there is no such fixture, the test is not restricted and will run indefinitely as usual.
The test must also be marked with pytest.mark.asyncio, but that is needed anyway.
# Add to conftest.py
import asyncio

import pytest

_TIMEOUT_FIXTURE_PREFIX = "timeout"

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_setup(item: pytest.Item):
    """Wrap all tests marked with pytest.mark.asyncio with their specified timeout.

    Must run as early as possible.

    Parameters
    ----------
    item : pytest.Item
        Test to wrap
    """
    yield
    orig_obj = item.obj
    timeouts = [n for n in item.funcargs if n.startswith(_TIMEOUT_FIXTURE_PREFIX)]
    # Picks the closest timeout fixture if there are multiple
    tname = None if len(timeouts) == 0 else timeouts[-1]
    # Only pick marked functions
    if item.get_closest_marker("asyncio") is not None and tname is not None:
        async def new_obj(*args, **kwargs):
            """Timed wrapper around the test function."""
            try:
                return await asyncio.wait_for(
                    orig_obj(*args, **kwargs), timeout=item.funcargs[tname]
                )
            except Exception:
                pytest.fail(f"Test {item.name} did not finish in time.")

        item.obj = new_obj
Example:
import asyncio as aio

import pytest

@pytest.fixture
def timeout_2s():
    return 2

@pytest.fixture(scope="module", autouse=True)
def timeout_5s():
    # You can do whatever you need here, just return/yield a number
    return 5

@pytest.mark.asyncio
async def test_timeout_1():
    # Uses timeout_5s fixture by default
    await aio.sleep(0)  # Passes
    return 1

@pytest.mark.asyncio
async def test_timeout_2(timeout_2s):
    # Uses timeout_2s because it is closest
    await aio.sleep(5)  # Times out
WARNING
Might not work with some other plugins. I have only tested it with pytest-asyncio, and it definitely won't work if item is redefined by some other hook.
I just loved Quimby's approach of marking tests with timeouts. Here's my attempt to improve it, using pytest marks:
# tests/conftest.py
import asyncio

import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem: pytest.Function):
    """
    Wrap all tests marked with pytest.mark.async_timeout with their specified timeout.
    """
    orig_obj = pyfuncitem.obj
    if marker := pyfuncitem.get_closest_marker("async_timeout"):
        async def new_obj(*args, **kwargs):
            """Timed wrapper around the test function."""
            try:
                return await asyncio.wait_for(orig_obj(*args, **kwargs), timeout=marker.args[0])
            except (asyncio.CancelledError, asyncio.TimeoutError):
                pytest.fail(f"Test {pyfuncitem.name} did not finish in time.")

        pyfuncitem.obj = new_obj
    yield

def pytest_configure(config: pytest.Config):
    config.addinivalue_line(
        "markers",
        "async_timeout(timeout): cancels the test execution after the specified amount of seconds",
    )
Usage:
@pytest.mark.asyncio
@pytest.mark.async_timeout(10)
async def potentially_hanging_function():
    await asyncio.sleep(20)
It should not be hard to add this to the asyncio mark in pytest-asyncio itself, so we could get syntax like:
@pytest.mark.asyncio(timeout=10)
async def potentially_hanging_function():
    await asyncio.sleep(20)
EDIT: looks like there's already a PR for that.
I'm trying to test if the application is retrying.
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        # doing something
        VideoManager().convert(gif_url)
        return
    except Exception as exc:
        raise convert_video.retry(exc=exc)
And I'm mocking it in the test:
@patch('src.video_manager.VideoManager.convert')
@patch('requests.post')
def test_retry_failed_task(self, mock_video_manager, mock_requests):
    mock_video_manager.return_value = {'webm': 'file.webm', 'mp4': 'file.mp4', 'ogv': 'file.ogv', 'snapshot': 'snapshot.png'}
    mock_video_manager.side_effect = Exception('some error')
    server.convert_video.retry = MagicMock()
    server.convert_video('gif_url', 'http://www.company.com/webhook?attachment_id=1234')
    server.convert_video.retry.assert_called_with(ANY)
And I'm getting this error
TypeError: exceptions must be old-style classes or derived from BaseException, not MagicMock
Which is obvious, but I don't know how else to test whether the method is being called.
I haven't gotten it to work with just the built-in retry, so I use a mock with the real Retry as its side effect; this makes it possible to catch it in a test.
I've done it like this:
from celery.exceptions import Retry
from mock import MagicMock
from nose.plugins.attrib import attr

# Set it for every task call (or per task below with @patch)
task.retry = MagicMock(side_effect=Retry)

# @patch('task.retry', MagicMock(side_effect=Retry))
def test_task(self):
    with assert_raises(Retry):
        task()  # Note, no delay or things like that

# and the task, I don't know if it works without bind.
@celery.task(bind=True)
def task(self):
    raise self.retry()
If anyone knows how I can get rid of the extra step in mocking the Retry "exception" I'd be happy to hear it!
from mock import patch
import pytest

@patch('tasks.convert_video.retry')
@patch('tasks.VideoManager')
def test_retry_on_exception(mock_video_manager, mock_retry):
    mock_video_manager.convert.side_effect = error = Exception()

    with pytest.raises(Exception):
        tasks.convert_video('foo', 'bar')

    mock_retry.assert_called_with(exc=error)
You're also missing some things in your task:
@celery.task(bind=False, default_retry_delay=30)
def convert_video(gif_url, webhook):
    try:
        return VideoManager().convert(gif_url)
    except Exception as exc:
        convert_video.retry(exc=exc)
The answers here didn't help me, so I dived even deeper into celery's code and found a hack that works for me:
def test_celery_retry(monkeypatch):
    # so the retry will be eager
    monkeypatch.setattr(celery_app.conf, 'task_always_eager', True)
    # so celery won't try to raise an error and actually retry
    monkeypatch.setattr(celery.app.task.Context, 'called_directly', False)
    task.delay()
For me it worked to patch celery.app.task.Task.request. This way I could also simulate later retries (e.g. to test that the task is retried multiple times).
Using pytest and unittest.mock.patch() this looks like:
@mock.patch("celery.app.task.Task.request")
def test_celery_task_retry(mock_request):
    # Override called_directly so that Task.retry() produces a Retry exception.
    mock_request.called_directly = False
    # Simulate the 42nd retry.
    mock_request.retries = 42

    with pytest.raises(celery.exceptions.Retry) as retry_exc:
        task()

    assert retry_exc.value.when > 0