Using @pytest.fixture(scope="module") with @pytest.mark.asyncio - python

I think the example below is a really common use case:
create a connection to a database once,
pass this connection to tests which insert data,
pass the connection to a test which verifies the data.
Changing the fixture to @pytest.fixture(scope="module") causes ScopeMismatch: You tried to access the 'function' scoped fixture 'event_loop' with a 'module' scoped request object, involved factories.
Also, the test_insert and test_find coroutines do not need the event_loop argument, because the loop is already accessible through the connection.
Any ideas on how to fix these two issues?
import pytest


@pytest.fixture(scope="function")  # <-- want this to be scope="module"; run once!
@pytest.mark.asyncio
async def connection(event_loop):
    """Expensive function; want to do this in module scope. Only this function needs `event_loop`!"""
    conn = await make_connection(event_loop)
    return conn


@pytest.mark.dependency()
@pytest.mark.asyncio
async def test_insert(connection, event_loop):  # <-- does not need event_loop arg
    """Test insert into database.
    NB does not need event_loop argument; just the connection.
    """
    _id = 0
    success = await connection.insert(_id, "data")
    assert success == True


@pytest.mark.dependency(depends=['test_insert'])
@pytest.mark.asyncio
async def test_find(connection, event_loop):  # <-- does not need event_loop arg
    """Test database find.
    NB does not need event_loop argument; just the connection.
    """
    _id = 0
    data = await connection.find(_id)
    assert data == "data"

The solution is to redefine the event_loop fixture with module scope. Include it in the test file:
import asyncio


@pytest.fixture(scope="module")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop
    loop.close()
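A minimal sketch of how the connection fixture might then look, assuming pytest-asyncio picks up the async fixture (newer versions may require @pytest_asyncio.fixture) and keeping make_connection as the question's placeholder; note that fixtures themselves do not need the @pytest.mark.asyncio marker:
@pytest.fixture(scope="module")
async def connection(event_loop):
    """Created once per module; only this fixture needs `event_loop`."""
    conn = await make_connection(event_loop)  # placeholder from the question
    yield conn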

A similar ScopeMismatch issue was raised on GitHub for pytest-asyncio (link). The solution below works for me:
@pytest.yield_fixture(scope='class')
def event_loop(request):
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()

Related

PendingRollbackError when accessing test database in FastAPI async test

I'm trying to mimic Django's behavior when running tests on FastAPI: I want to create a test database at the beginning of each test and destroy it at the end. The problem is that the async nature of FastAPI breaks everything. When I did a sanity check and made everything synchronous, it worked beautifully. But when I try to run things async, everything breaks. Here's what I have at the moment:
The fixture:
@pytest.fixture(scope="session")
def event_loop():
    return asyncio.get_event_loop()


@pytest.fixture(scope="session")
async def session():
    sync_test_db = "postgresql://postgres:postgres@postgres:5432/test"
    if not database_exists(sync_test_db):
        create_database(sync_test_db)

    async_test_db = "postgresql+asyncpg://postgres:postgres@postgres:5432/test"
    engine = create_async_engine(url=async_test_db, echo=True, future=True)
    async with engine.begin() as conn:
        await conn.run_sync(SQLModel.metadata.create_all)

    Session = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
    async with Session() as session:
        def get_session_override():
            return session

        app.dependency_overrides[get_session] = get_session_override
        yield session

    drop_database(sync_test_db)
The test:
class TestSomething:
    @pytest.mark.asyncio
    async def test_create_something(self, session):
        data = {"some": "data"}

        response = client.post("/", json=data)
        assert response.ok

        results = await session.execute(select(Something))  # <- This line fails
        assert len(results.all()) == 1
The error:
E sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: Task <Task pending name='anyio.from_thread.BlockingPortal._call_func' coro=<BlockingPortal._call_func() running at /usr/local/lib/python3.9/site-packages/anyio/from_thread.py:187> cb=[TaskGroup._spawn.<locals>.task_done() at /usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py:629]> got Future <Future pending cb=[Protocol._on_waiter_completed()]> attached to a different loop (Background on this error at: https://sqlalche.me/e/14/7s2a)
/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/session.py:601: PendingRollbackError
Any ideas what I might be doing wrong?
Check whether other statements in your test cases involving the database fail before this error is raised.
For me, the PendingRollbackError was caused by an InsertionError raised by a prior test.
All my tests were (async) unit tests that performed insertions into a Postgres database.
After each test, the database session was supposed to roll back its entries.
The InsertionError was caused by insertions that violated a unique constraint; all subsequent tests then raised the PendingRollbackError.
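One way to keep a failure in one test from cascading into the next is to roll the session back explicitly in the fixture teardown. A rough sketch, assuming an async SQLAlchemy session fixture similar to the one in the question (the engine fixture is an assumption):
import pytest
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import sessionmaker


@pytest.fixture
async def session(engine):  # `engine` assumed to be provided by another fixture
    Session = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
    async with Session() as session:
        try:
            yield session
        finally:
            # Undo whatever the test left behind, so a failed insert in one test
            # does not surface as PendingRollbackError in the next one.
            await session.rollback()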

How to correctly mock and test for async method call

I have a Server class that is responsible for spinning up an asyncio loop. A simplified outline is present in the code example below.
I am writing a test (with pytest) to mock and ensure that the Server calls a specific method when it starts to run. The test succeeds, but I get a RuntimeWarning that the mock was never awaited.
test_async_func_call.py::test_that_poll_is_called
/Users/subhashb/.pyenv/versions/3.9.4/lib/python3.9/asyncio/events.py:80: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited
self._context.run(self._callback, *self._args)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
What is the correct way to test the method call? I could avoid this entirely and write an integration test, but I am curious to understand how to await a mock.
Code example:
import asyncio

from mock import patch


class Server:
    def __init__(self, test_mode=False):
        self.loop = asyncio.get_event_loop()
        self.test_mode = test_mode

    async def poll(self):
        print("Polling...")
        self.loop.call_later(0.5, self.poll)

    def run(self):
        self.loop.call_soon(self.poll)

        if self.test_mode:
            self.loop.call_soon(self.loop.stop)

        self.loop.run_forever()


@patch.object(Server, "poll")
def test_that_poll_is_called(mock):
    Server(test_mode=True).run()
    mock.assert_called_once()
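For reference, unittest.mock.AsyncMock (Python 3.8+) produces a coroutine when called, and the RuntimeWarning goes away once that coroutine is actually awaited. A minimal sketch, independent of the Server class above:
import asyncio
from unittest.mock import AsyncMock


def test_awaiting_a_mock():
    mock_poll = AsyncMock(return_value=None)
    # Calling the mock creates a coroutine; running it to completion awaits the mock.
    asyncio.run(mock_poll())
    mock_poll.assert_awaited_once()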

sqlalchemy when does an object become "not persistent"

I have a function with a semi-long-running session that I use for a bunch of database rows... and at a certain point I want to reload or "refresh" one of the rows to make sure none of the state has changed. Most of the time this code works fine, but every now and then I get this error:
sqlalchemy.exc.InvalidRequestError: Instance '<Event at 0x58cb790>' is not persistent within this Session
I've been reading up on instance states but cannot understand why an object would stop being persistent. I'm still within a session, so I'm not sure why it would stop being persistent.
Can someone explain what could cause my object to be "not persistent" within the session? I'm not doing any writing to the object prior to this point.
db_event below is the object that is becoming "not persistent"
async def event_white_check_mark_handler(
    self: Events, ctx, channel: TextChannel, member: discord.Member, message: Message
):
    """
    This reaction is for completing an event
    """
    session = database_objects.SESSION()
    try:
        message_id = message.id
        db_event = self.get_event(session, message_id)
        if not db_event:
            return
        logger.debug(f"{member.display_name} wants to complete an event {db_event.id}")
        db_guild = await db.get_or_create(
            session, db.Guild, name=channel.guild.name, discord_id=channel.guild.id
        )
        db_member = await db.get_or_create(
            session,
            db.Member,
            name=member.name,
            discord_id=member.id,
            nick=member.display_name,
            guild_id=db_guild.discord_id,
        )
        db_scheduler_config: db.SchedulerConfig = (
            session.query(db.SchedulerConfig)
            .filter(db.SchedulerConfig.guild_id == channel.guild.id)
            .one()
        )
        # reasons to not complete the event
        if len(db_event) == 0:
            await channel.send(
                f"{member.display_name} you cannot complete an event with no one on it!"
            )
        elif (
            db_member.discord_id == db_event.creator_id
            or await db_scheduler_config.check_permission(
                ctx, db_event.event_name, member, db_scheduler_config.MODIFY
            )
        ):
            async with self.EVENT_LOCKS[db_event.id]:
                session.refresh(db_event)  ########### <---- right here is when I get the error thrown
                db_event.status = const.COMPLETED
                session.commit()
                self.DIRTY_EVENTS.add(db_event.id)
                member_list = ",".join(
                    filter(
                        lambda x: x not in const.MEMBER_FIELD_DEFAULT,
                        [str(x.mention) for x in db_event.members],
                    )
                )
                await channel.send(f"Congrats on completing a event {member_list}!")
                logger.info(f"Congrats on completing a event {member_list}!")
                # await self.stop_tracking_event(db_event)
                del self.REMINDERS_BY_EVENT_ID[db_event.id]
        else:
            await channel.send(
                f"{member.display_name} you did not create this event and do not have permission to delete the event!"
            )
            logger.warning(f"{member.display_name} you did not create this event!")
    except Exception as _e:
        logger.error(format_exc())
        session.rollback()
    finally:
        database_objects.SESSION.remove()
I am fairly certain that the root cause in this case is a race condition. A scoped session in its default configuration manages scope based on the thread only. Using coroutines on top of that can mean that two or more end up sharing the same session, and in the case of event_white_check_mark_handler they then race to commit/rollback and to remove the session from the scoped-session registry, effectively closing it and expunging all remaining instances from the now-defunct session, making the other coroutines unhappy.
A solution is to not use scoped sessions at all in event_white_check_mark_handler, because it fully manages its session's lifetime, and seems to pass the session forward as an argument. If on the other hand there are some paths that use the scoped session database_objects.SESSION instead of receiving the session as an argument, define a suitable scopefunc when creating the registry:
https://docs.sqlalchemy.org/en/13/orm/contextual.html#using-custom-created-scopes
SQLAlchemy+Tornado: How to create a scopefunc for SQLAlchemy's ScopedSession?
Correct usage of sqlalchemy scoped_session with python asyncio
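As a rough sketch of the scopefunc approach, assuming the desired scope is one session per asyncio task rather than per thread (engine stands in for whatever database_objects already configures):
import asyncio

from sqlalchemy.orm import scoped_session, sessionmaker

SESSION = scoped_session(
    sessionmaker(bind=engine),
    scopefunc=asyncio.current_task,  # key the registry on the current task instead of the thread
)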
I experienced this issue when retrieving a session from a generator and trying to run the exact same query again from different yielded sessions:
SessionLocal = sessionmaker(bind=engine, class_=Session)


def get_session() -> Generator:
    with SessionLocal() as session:
        yield session
The solution, in my case, was to use the session directly. In your case, I would perhaps commit the session before executing a new query (see the sketch after the snippet below).
def get_data():
    with Session(engine) as session:
        statement = select(Company)
        results = session.exec(statement)
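And a small sketch of the "commit before executing a new query" suggestion, using a hypothetical Company write to illustrate:
def add_and_requery():
    with Session(engine) as session:
        session.add(Company(name="acme"))  # hypothetical insert
        session.commit()                   # finish the transaction first
        results = session.exec(select(Company))
        return results.all()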

How do I use aio-pika with FastAPI?

I want to create a REST service using FastAPI and aio-pika while working asynchronously. For other async database drivers, I could create clients on startup, then get them in route handlers. For example, with motor I would declare a simple connection manager:
from motor.motor_asyncio import AsyncIOMotorClient


class Database:
    client: AsyncIOMotorClient = None


db = Database()


async def connect_to_mongo():
    db.client = AsyncIOMotorClient("mongo:27017")


async def close_mongo_connection():
    db.client.close()


async def get_mongo_client() -> AsyncIOMotorClient:
    return db.client
Then add a couple of handlers:
app.add_event_handler("startup", connect_to_mongo)
app.add_event_handler("shutdown", close_mongo_connection)
and then just use get_mongo_client to get a client in my handler.
The problem here is that aio-pika needs an asyncio loop to function. Here is an example from the docs:
connection = await aio_pika.connect_robust(
    "amqp://guest:guest@127.0.0.1/", loop=loop
)
And with FastAPI I don't have an asyncio loop. Is there any way to use it with an interface like in the example above? Can I just create a new loop using asyncio.get_event_loop() and pass it to connect_robust without really using it anywhere? Like this:
connection = await aio_pika.connect_robust(
    "amqp://guest:guest@127.0.0.1/", loop=asyncio.get_event_loop()
)
Ok, so, according to the docs, I can just use connect instead of connect_robust:
connection = await aio_pika.connect(
    "amqp://guest:guest@127.0.0.1/"
)
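Mirroring the motor pattern above, a rough sketch of wiring this into FastAPI's startup/shutdown handlers (the broker URL and the Rabbit holder class are placeholders; inside the running event loop no explicit loop argument is needed):
import aio_pika


class Rabbit:
    connection: aio_pika.Connection = None


rabbit = Rabbit()


async def connect_to_rabbit():
    rabbit.connection = await aio_pika.connect("amqp://guest:guest@rabbitmq/")


async def close_rabbit_connection():
    await rabbit.connection.close()


app.add_event_handler("startup", connect_to_rabbit)
app.add_event_handler("shutdown", close_rabbit_connection)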

how to return value from tornado.ioloop.IOLoop instance add_future method

I'm using RethinkDB with Tornado with an async approach. Based on my data model in RethinkDB, upon inserting a record into my topic table, I also have to update/insert a new record into the user_to_topic table. Here's the basic setup of my post request handler.
class TopicHandler(tornado.web.RequestHandler):
    def get(self):
        pass

    @gen.coroutine
    def post(self, *args, **kwargs):
        # to establish database connection
        connection = rth.connect(host='localhost', port=00000, db=DATABASE_NAME)
        # Thread the connection
        threaded_conn = yield connection

        title = self.get_body_argument('topic_title')

        # insert into table
        new_topic_record = rth.table('Topic').insert({
            'title': title,
        }, conflict="error",
           durability="hard").run(threaded_conn)

        # {TODO} for now assume the result is always written successfully
        io_loop = ioloop.IOLoop.instance()

        # I want to return my generated_keys here
        io_loop.add_future(new_topic_record, self.return_written_record_id)

        # do stuff with the generated_keys here, how do I get those keys

    def return_written_record_id(self, f):
        _result = f.result()
        return _result['generated_keys']
After the insert operation finishes, the Future returns the result of the RethinkDB insert operation through its Future.result() method, from which I can retrieve the id of the new record via the generated_keys attribute. According to the Tornado documentation, I can do stuff with the result in my callback function, return_written_record_id. Of course I could do all the other database operations inside return_written_record_id, but is it possible to return all the ids to my post function? Or is this the way it has to be when using coroutines in Tornado?
Any suggestion will be appreciated. Thank you!
Simply do:
result = yield new_topic_record
Whenever you yield a Future inside a coroutine, Tornado pauses the coroutine until the Future is resolved to a value or exception. Then Tornado resumes the coroutine by passing the value in or raising an exception at the "yield" expression.
For more info, see Refactoring Tornado Coroutines.
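Applied to the handler above, a rough sketch of the post method with the insert yielded directly (imports and placeholders such as port=00000 are kept from the question):
class TopicHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def post(self, *args, **kwargs):
        threaded_conn = yield rth.connect(host='localhost', port=00000, db=DATABASE_NAME)
        title = self.get_body_argument('topic_title')
        # Yielding the insert pauses the coroutine until RethinkDB returns the result
        result = yield rth.table('Topic').insert(
            {'title': title}, conflict="error", durability="hard"
        ).run(threaded_conn)
        generated_keys = result['generated_keys']
        # ... continue with the user_to_topic update using generated_keys ...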
