Run a subcommand inside a context manager - python

In the context of a Python click CLI application, I would like to run a subcommand inside a context manager that would be set up in a higher-level command. How is it possible to do that with click? My pseudo-code looks something like:
import click
from contextlib import contextmanager

@contextmanager
def database_context(db_url):
    try:
        print(f'setup db connection: {db_url}')
        yield
    finally:
        print('teardown db connection')

@click.group()
@click.option('--db', default='local')
def main(db):
    print(f'running command against {db} database')
    db_url = get_db_url(db)
    connection_manager = database_context(db_url)
    # here comes the mysterious part that makes all subcommands
    # run inside the connection manager

@main.command()
def do_this_thing():
    print('doing this thing')

@main.command()
def do_that_thing():
    print('doing that thing')
And this would be called like:
> that_cli do_that_thing
running command against local database
setup db connection: db://user:pass@localdb:db_name
doing that thing
teardown db connection
> that_cli --db staging do_this_thing
running command against staging database
setup db connection: db://user:pass@123.456.123.789:db_name
doing this thing
teardown db connection
Edit: note that the above example is contrived to better illustrate the missing functionality of click, not that I want to solve this particular problem. I know I could repeat the same code in all commands and achieve the same effect, which is what I already do in my real use case. My question is precisely about what I could do in the main function alone that would transparently run all subcommands inside a context manager.

Decorating commands
Define a context manager decorator using contextlib.ContextDecorator
Use the click.pass_context decorator on main() so you can access the click context
Create an instance db_context of the context manager
Iterate over the commands defined for the group main using ctx.command.commands
For each command, replace the original callback (the function called by the command) with the same callback decorated by the context manager: db_context(cmd.callback)
This way you will programmatically modify each command to behave just like:
@main.command()
@db_context
def do_this_thing():
    print('doing this thing')
But without requiring you to change any code beyond your main() function.
See the code below for a working example:
import click
from contextlib import ContextDecorator

class Database_context(ContextDecorator):
    """Decorator context manager."""
    def __init__(self, db_url):
        self.db_url = db_url

    def __enter__(self):
        print(f'setup db connection: {self.db_url}')

    def __exit__(self, type, value, traceback):
        print('teardown db connection')

@click.group()
@click.option('--db', default='local')
@click.pass_context
def main(ctx, db):
    print(f'running command against {db} database')
    db_url = db  # get_db_url(db)
    # here comes the mysterious part that makes all subcommands
    # run inside the connection manager
    db_context = Database_context(db_url)  # init the context manager decorator
    for name, cmd in ctx.command.commands.items():  # iterate over main's commands
        cmd.allow_extra_args = True  # seems to be required, not sure why
        cmd.callback = db_context(cmd.callback)  # decorate the command callback with the context manager

@main.command()
def do_this_thing():
    print('doing this thing')

@main.command()
def do_that_thing():
    print('doing that thing')

if __name__ == "__main__":
    main()
It does what you describe in your question; I hope it works as expected in your real code.
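For illustration, saving the example above as that_cli.py and invoking a subcommand should print something along these lines:
> python that_cli.py --db staging do_this_thing
running command against staging database
setup db connection: staging
doing this thing
teardown db connection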
Using click.pass_context
The code below will give you an idea of how to do it using click.pass_context.
import click
from contextlib import contextmanager

@contextmanager
def database_context(db_url):
    try:
        print(f'setup db connection: {db_url}')
        yield
    finally:
        print('teardown db connection')

@click.group()
@click.option('--db', default='local')
@click.pass_context
def main(ctx, db):
    ctx.ensure_object(dict)
    print(f'running command against {db} database')
    db_url = db  # get_db_url(db)
    # Initiate the context manager
    ctx.obj['context'] = database_context(db_url)

@main.command()
@click.pass_context
def do_this_thing(ctx):
    with ctx.obj['context']:
        print('doing this thing')

@main.command()
@click.pass_context
def do_that_thing(ctx):
    with ctx.obj['context']:
        print('doing that thing')

if __name__ == "__main__":
    main(obj={})
Another solution that avoids the explicit with statement could be passing the context manager as a decorator using contextlib.ContextDecorator, but it would likely be more complex to set up with click.

This use case is supported natively in Click from v8.0 by using
ctx.with_resource(context_manager)
https://click.palletsprojects.com/en/8.0.x/api/#click.Context.with_resource
There is a worked example in the Click advanced documentation
https://click.palletsprojects.com/en/8.0.x/advanced/#managing-resources
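As a minimal sketch of how that looks with the question's example (assuming Click >= 8.0; database_context is the same generator-based context manager from the question):
import click
from contextlib import contextmanager

@contextmanager
def database_context(db_url):
    try:
        print(f'setup db connection: {db_url}')
        yield
    finally:
        print('teardown db connection')

@click.group()
@click.option('--db', default='local')
@click.pass_context
def main(ctx, db):
    print(f'running command against {db} database')
    # with_resource enters the context manager now and registers its exit
    # to run when the click context closes, i.e. after the subcommand returns
    ctx.with_resource(database_context(db))

@main.command()
def do_this_thing():
    print('doing this thing')

@main.command()
def do_that_thing():
    print('doing that thing')

if __name__ == "__main__":
    main()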

Related

How to test operations in a context manager using pytest

I have a database handler that uses the SQLAlchemy ORM to communicate with a database. As part of SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function that is called inside the context manager (and uses that context manager) has done?
EDIT: I realized the file structure mattered due to the complexity it introduced. I restructured the code below to more closely mirror what the end file structure will be like, and what a common production repo in my environment would look like, with code defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable

def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        pass

def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens in 'MockSession()' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock as it's easier to control the methods/data flow with a custom mock class
Since the SQLAlchemy session context manager requires a begin() to start the context, my mock class needed a begin. Returning self in begin allows us to test the values later.
Context managers rely on the magic methods __enter__ and __exit__ with the argument signatures you see above.
The mocked class contains mocked methods which alter instance variables, allowing us to test them later.
This relies on monkeypatch (there are other ways, I'm sure), but what's important to note is that you want to patch in an instance of your mock class and not the class itself. The parentheses make a world of difference (see the short self-contained sketch after these notes).
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
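As a small, self-contained sketch of that instance-versus-class point (Handler here is a hypothetical stand-in, not the real db_handler module; monkeypatch is the built-in pytest fixture):
class Handler:
    """Hypothetical stand-in for db_handler.Handler."""
    Session = None  # in real code this would be a SQLAlchemy sessionmaker

class MockSession:
    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        pass

def test_patched_with_instance(monkeypatch):
    handler = Handler()
    # Patch with an instance, so handler.Session.begin() works exactly as in
    # the production code; patching with the class itself would make
    # handler.Session.begin() fail, since begin() still expects an instance.
    monkeypatch.setattr(handler, "Session", MockSession())
    with handler.Session.begin() as session:
        assert session is handler.Session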

Replacing python with_statement by setUp and tearDown in Unittest

In a test suite I have some code organized as below; context is some persistent object that is deleted when exiting the with block:
class Test(TestCase):
    def test_action1(self):
        with create_context() as context:
            context.prepare_context()
            context.action1()
            self.assertTrue(context.check1())

    def test_action2(self):
        with create_context() as context:
            context.prepare_context()
            context.action2()
            self.assertTrue(context.check2())
It's obvious that the code has some repetition of setup boilerplate in both tests, hence I would like to use the setUp() and tearDown() methods to factor out that boilerplate.
But I don't know how to extract the with statement. What I came up with is something like this:
class Test(TestCase):
    def setUp(self):
        self.context = create_context()
        self.context.prepare_context()

    def tearDown(self):
        del self.context

    def test_action1(self):
        self.context.action1()
        self.assertTrue(self.context.check1())

    def test_action2(self):
        self.context.action2()
        self.assertTrue(self.context.check2())
But I believe this is not really equivalent when a test fails, and having to put an explicit delete in tearDown() doesn't feel right.
What is the correct way to change my with-statement code to the setUp() and tearDown() style?
I'm not 100% sure about the setUp() and tearDown(), but the methods defined in context managers __enter__ and __exit__ sound like they do what you want them to do (just with different names):
class ContextTester():
    def __enter__(self):
        self.context = create_context()
        self.context.prepare_context()
        return self.context

    def __exit__(self, exc_type, exc_value, exc_traceback):
        self.context.close()

class Test(TestCase):
    def test_action1(self):
        with ContextTester() as context:
            context.action1()
            self.assertTrue(context.check1())
You might want to use contextlib.ExitStack for that.
A context manager that is designed to make it easy to programmatically combine other context managers and cleanup functions, especially those that are optional or otherwise driven by input data.
import contextlib
from unittest import TestCase

class Test(TestCase):
    def setUp(self) -> None:
        stack = contextlib.ExitStack()
        self.context = stack.enter_context(create_context())  # create_context is your context manager
        self.addCleanup(stack.close)

    def test_action1(self):
        self.context.prepare_context()
        self.context.action1()
        self.assertTrue(self.context.check1())
Or, if you want more control over the teardown or need to use multiple context managers, this will be better:
import contextlib
from unittest import TestCase

class Test(TestCase):
    def setUp(self):
        with contextlib.ExitStack() as stack:
            self.context = stack.enter_context(create_context())  # create_context is your context manager
            self._resource_stack = stack.pop_all()

    def tearDown(self):
        self._resource_stack.close()

    def test_action1(self):
        self.context.prepare_context()
        self.context.action1()
        self.assertTrue(self.context.check1())

Provide contextvars.Context with a ContextManager

I'm trying to manage transactions in my DB framework (I use MongoDB with umongo over pymongo).
To use transactions, one must pass a session kwarg along the whole call chain. I would like to provide a context manager that would isolate the transaction; only the function at the end of the call chain would need to be aware of the session object.
I found out about context variables and I'm close to something, but not totally there.
What I would like to have:
with Transaction():
    # Do stuff
    d = MyDocument.find_one()
    d.attr = 12
    d.commit()
Here's what I came up with for now:
from contextlib import AbstractContextManager
from contextvars import ContextVar, copy_context

s = ContextVar('session', default=None)

class Transaction(AbstractContextManager):
    def __init__(self):
        self.ctx = copy_context()
        # Create a new DB session
        session = db.create_session()
        # Set the session in the context
        self.ctx.run(s.set, session)

    def __exit__(self, *args, **kwargs):
        pass

    # Adding a run method for convenience
    def run(self, func, *args, **kwargs):
        self.ctx.run(func, *args, **kwargs)

def func():
    d = MyDocument.find_one()
    d.attr = 12
    d.commit()

with Transaction() as t:
    t.run(func)
But I don't get the nice context manager syntax. The point of the context manager would be to say "everything in this block should run in that context".
What I wrote above is not really better than just using a function:
def run_transaction(func, *args, **kwargs):
    ctx = copy_context()
    session = 12
    ctx.run(s.set, session)
    ctx.run(func)

run_transaction(func)
Am I on the wrong track?
Am I misusing context variables?
Any other way to achieve what I'm trying to do?
Basically, I'd like to be able to open a context like a context manager
session = ContextVar('session', default=None)

with copy_context() as ctx:
    session = db.create_session()
    # Do stuff
    d = MyDocument.find_one()
    d.attr = 12
    d.commit()
I'd embed this in a Transaction context manager to manage the session stuff and only keep operations on d in user code.
You can use a contextmanager to create the session and transaction and store the session in the ContextVar for use by other functions.
from contextlib import contextmanager
from contextvars import ContextVar
import argparse
import pymongo

SESSION = ContextVar("session", default=None)

@contextmanager
def transaction(client):
    with client.start_session() as session:
        with session.start_transaction():
            t = SESSION.set(session)
            try:
                yield
            finally:
                SESSION.reset(t)

def insert1(client):
    client.test.txtest1.insert_one({"data": "insert1"}, session=SESSION.get())

def insert2(client):
    client.test.txtest2.insert_one({"data": "insert2"}, session=SESSION.get())

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--url", default="mongodb://localhost:27017")
    args = parser.parse_args()
    client = pymongo.MongoClient(args.url)

    # Create and clear collections; collections must be created outside the transaction
    insert1(client)
    client.test.txtest1.delete_many({})
    insert2(client)
    client.test.txtest2.delete_many({})

    with transaction(client):
        insert1(client)
        insert2(client)

    for doc in client.test.txtest1.find({}):
        print(doc)
    for doc in client.test.txtest2.find({}):
        print(doc)

if __name__ == "__main__":
    main()
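For the class-based with Transaction(): syntax the question asks for, the same set/reset token idea works directly in __enter__/__exit__ without copy_context; a minimal sketch using a fake session string in place of a real DB session:
from contextvars import ContextVar

SESSION = ContextVar('session', default=None)

class Transaction:
    """Sets the session ContextVar on enter and restores it on exit."""
    def __init__(self, session):
        self.session = session
        self._token = None

    def __enter__(self):
        self._token = SESSION.set(self.session)
        return self.session

    def __exit__(self, exc_type, exc_value, traceback):
        SESSION.reset(self._token)

def do_work():
    # Deep in the call chain, the session is read from the ContextVar
    print('using session:', SESSION.get())

with Transaction('fake-session'):
    do_work()           # prints: using session: fake-session

print(SESSION.get())    # None again outside the block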

How to use coroutine as a pytest fixture?

Is it possible to write pytest fixtures as tornado coroutines? For example, I want to write a fixture for creating a db, like this:
from tornado import gen
import pytest

@pytest.fixture
@gen.coroutine
def get_db_connection():
    # set up
    db_name = yield create_db()
    connection = yield connect_to_db(db_name)
    yield connection
    # tear down
    yield drop_db(db_name)

@pytest.mark.gen_test
def test_something(get_db_connection):
    # some tests
Obviously, this fixture does not work as expected, as it is called as a function, not as coroutine. Is there a way to fix it?
After some research, I came out with this solution:
from functools import partial
from tornado import gen
import pytest

@pytest.fixture
def get_db_connection(request, io_loop):  # io_loop is a fixture from pytest-tornado
    def fabric():
        @gen.coroutine
        def set_up():
            db_name = yield create_db()
            connection = yield connect_to_db(db_name)
            raise gen.Return(connection)

        @gen.coroutine
        def tear_down():
            yield drop_db(db_name)

        request.addfinalizer(partial(io_loop.run_sync, tear_down))
        connection = io_loop.run_sync(set_up)
        return connection
    return fabric

@pytest.mark.gen_test
def test_something(get_db_connection):
    connection = get_db_connection()  # note the brackets
I'm sure it can be done more cleanly using some pytest magic.
I found these slides very useful.
EDIT: I found out that the above method has a limitation. You can't change the scope of the get_db_connection fixture, as it uses the io_loop fixture, which has "function" scope.

Python unit tests run function after all test

I need to test something in Python over SSH. I don't want to make an SSH connection for every test, because it takes too long, so I have written this:
class TestCase(unittest.TestCase):
    client = None

    def setUp(self):
        if not hasattr(self.__class__, 'client') or self.__class__.client is None:
            self.__class__.client = paramiko.SSHClient()
            self.__class__.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            self.__class__.client.connect(hostname=consts.get_host(), port=consts.get_port(),
                                          username=consts.get_user(), password=consts.get_password())

    def test_a(self):
        pass

    def test_b(self):
        pass

    def test_c(self):
        pass

    def disconnect(self):
        self.__class__.client.close()
and my runner
if __name__ == '__main__':
    suite = unittest.TestSuite((
        unittest.makeSuite(TestCase),
    ))
    result = unittest.TextTestRunner().run(suite)
    TestCase.disconnect()
    sys.exit(not result.wasSuccessful())
In this version I get the error TypeError: unbound method disconnect() must be called with TestCase instance as first argument (got nothing instead). So how can I call disconnect after all tests pass?
With best regards.
You should use setUpClass and tearDownClass instead, if you want to keep the same connection for all tests. You'll also need to make the disconnect method static, so it belongs to the class and not an instance of the class.
class TestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.connection = <your connection setup>

    @staticmethod
    def disconnect():
        ... disconnect TestCase.connection

    @classmethod
    def tearDownClass(cls):
        cls.disconnect()
You can do it by defining startTestRun and stopTestRun on the unittest.TestResult class. setUpClass and tearDownClass run per test class (per test file), so if you have multiple files those methods will run for each one.
By adding the following code to my tests/__init__.py I managed to achieve it. This code runs only once for all tests (regardless of the number of test classes and test files).
def startTestRun(self):
    """
    https://docs.python.org/3/library/unittest.html#unittest.TestResult.startTestRun
    Called once before any tests are executed.
    :return:
    """
    DockerCompose().start()

setattr(unittest.TestResult, 'startTestRun', startTestRun)

def stopTestRun(self):
    """
    https://docs.python.org/3/library/unittest.html#unittest.TestResult.stopTestRun
    Called once after all tests are executed.
    :return:
    """
    DockerCompose().compose.stop()

setattr(unittest.TestResult, 'stopTestRun', stopTestRun)
There seems to be a simple solution for the basic (beginner's) case:
def tmain():
    setup()
    unittest.main(verbosity=1, exit=False)
    clean()
The trick is exit=False, which lets the function tmain run to the end.
setup() and clean() can be any functions that do what their names imply.
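A fuller, self-contained sketch of that pattern (the setup() and clean() bodies here are placeholders):
import unittest

def setup():
    print('global setup')       # e.g. open the shared SSH connection here

def clean():
    print('global teardown')    # e.g. close the shared connection here

class TestCase(unittest.TestCase):
    def test_a(self):
        self.assertTrue(True)

def tmain():
    setup()
    # exit=False stops unittest.main() from calling sys.exit(),
    # so clean() still runs after all tests finish
    unittest.main(verbosity=1, exit=False)
    clean()

if __name__ == '__main__':
    tmain()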
