Inner wrapper function never called - python

I'm currently working on a small Telegram bot library that uses PTB under the hood.
One of its syntactic sugar features is this decorator function, which I use to ensure code locality.
Unfortunately, the inner wrapper is never called. (In my current library version I have an additional decorator called @aexec that I place below @async_command, but that's not ideal.)
def async_command(name: str):
    def decorator(func):
        updater = PBot.updater
        updater.dispatcher.add_handler(CommandHandler(name, func, run_async=True))
        def wrapper(update: Update, context: CallbackContext):
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            loop.run_until_complete(func(update, context))
            loop.close()
        return wrapper
    return decorator
@async_command(name="start")
async def start_command(update: Update, context: CallbackContext) -> None:
    update.message.reply_text(text="Hello World")
"def wrapper" inside "async_command" is never called, so the function is never executed.
Any idea how to achieve this without needing an additional decorator to start a new asyncio event loop?
Note: PBot is just a simple class that contains one static "updater" that can be re-used anywhere in the code (PTB uses "updater" instances, which is not ideal for my use cases)
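For reference, a minimal sketch of what such a holder class might look like (the token handling here is an assumption, not part of the original code):

import os
from telegram.ext import Updater

class PBot:
    # One static Updater shared across the whole code base;
    # reading the token from the environment is a placeholder assumption
    updater = Updater(token=os.environ["BOT_TOKEN"])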
EDIT: I notice that the issue of the inner wrapper not being called only happens with async functions. Is this a case of differing calling conventions?

I actually figured it out myself.
I took the existing @aexec and chained the two functions internally, creating a new decorator.
def aexec(func):
    def wrapper(update: Update, context: CallbackContext):
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop.run_until_complete(func(update, context))
        loop.close()
    return wrapper
def _async_command(name: str):
    def wrapper(func):
        updater = PBot.updater
        updater.dispatcher.add_handler(CommandHandler(name, func, run_async=True))
        return func  # hand the function back so the decorated name stays usable
    return wrapper
def async_command(name: str):
    run_first = _async_command(name)
    def wrapper(func):
        return run_first(aexec(func))
    return wrapper
Now I can use @async_command(name="name_of_command"): it wraps the handler with aexec and then registers the wrapped handler via _async_command.
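For illustration, applying the combined decorator is equivalent to this explicit chaining (reusing start_command from the question):

async def start_command(update: Update, context: CallbackContext) -> None:
    update.message.reply_text(text="Hello World")

# Same effect as decorating with @async_command(name="start"):
# aexec wraps the coroutine in a sync runner, then _async_command registers it
start_command = _async_command("start")(aexec(start_command))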

Related

How to make a unified interface for synchronous and asynchronous code in Python? [duplicate]

When implementing classes that have uses in both synchronous and asynchronous applications, I find myself maintaining virtually identical code for both use cases.
Just as an example, consider:
from time import sleep
import asyncio

class UselessExample:
    def __init__(self, delay):
        self.delay = delay

    async def a_ticker(self, to):
        for i in range(to):
            yield i
            await asyncio.sleep(self.delay)

    def ticker(self, to):
        for i in range(to):
            yield i
            sleep(self.delay)

def func(ue):
    for value in ue.ticker(5):
        print(value)

async def a_func(ue):
    async for value in ue.a_ticker(5):
        print(value)

def main():
    ue = UselessExample(1)
    func(ue)
    loop = asyncio.get_event_loop()
    loop.run_until_complete(a_func(ue))

if __name__ == '__main__':
    main()
In this example it's not too bad: the ticker methods of UselessExample are easy to maintain in tandem. But you can imagine that exception handling and more complicated functionality can quickly grow a method and make it more of an issue, even though both methods can remain virtually identical (only replacing certain elements with their asynchronous counterparts).
Assuming there's no substantial difference that makes it worth having both fully implemented, what is the best (and most Pythonic) way of maintaining a class like this and avoiding needless duplication?
There is no one-size-fits-all road to making an asyncio coroutine-based codebase usable from traditional synchronous codebases. You have to make choices per codepath.
Pick and choose from a series of tools:
Synchronous versions using asyncio.run()
Provide synchronous wrappers around coroutines, which block until the coroutine completes.
Even an async generator function such as a_ticker() can be handled this way, in a loop:
class UselessExample:
    def __init__(self, delay):
        self.delay = delay

    async def a_ticker(self, to):
        for i in range(to):
            yield i
            await asyncio.sleep(self.delay)

    def ticker(self, to):
        agen = self.a_ticker(to)
        try:
            while True:
                yield asyncio.run(agen.__anext__())
        except StopAsyncIteration:
            return
These synchronous wrappers can be generated with helper functions:
from functools import wraps

def sync_agen_method(agen_method):
    @wraps(agen_method)
    def wrapper(self, *args, **kwargs):
        agen = agen_method(self, *args, **kwargs)
        try:
            while True:
                yield asyncio.run(agen.__anext__())
        except StopAsyncIteration:
            return
    if wrapper.__name__[:2] == 'a_':
        wrapper.__name__ = wrapper.__name__[2:]
    return wrapper
then just use ticker = sync_agen_method(a_ticker) in the class definition.
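For illustration, the resulting class definition would look something like this (a sketch based on the class above):

class UselessExample:
    def __init__(self, delay):
        self.delay = delay

    async def a_ticker(self, to):
        for i in range(to):
            yield i
            await asyncio.sleep(self.delay)

    # sync_agen_method runs at class creation time, while a_ticker
    # is still a plain function object
    ticker = sync_agen_method(a_ticker)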
Straight-up coroutine methods (not async generator methods) could be wrapped with:
def sync_method(async_method):
    @wraps(async_method)
    def wrapper(self, *args, **kwargs):
        return asyncio.run(async_method(self, *args, **kwargs))
    if wrapper.__name__[:2] == 'a_':
        wrapper.__name__ = wrapper.__name__[2:]
    return wrapper
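A hypothetical usage sketch (the DelayedValue class is illustrative, not from the original answer):

class DelayedValue:
    def __init__(self, delay, value):
        self.delay = delay
        self.value = value

    async def a_get(self):
        await asyncio.sleep(self.delay)
        return self.value

    # derived synchronous version; blocks until the coroutine completes
    get = sync_method(a_get)

# DelayedValue(0.1, 42).get() blocks for ~0.1 s and returns 42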
Factor out common components
Refactor out the synchronous parts, into generators, context managers, utility functions, etc.
For your specific example, pulling out the for loop into a separate generator would minimise the duplicated code to the way the two versions sleep:
class UselessExample:
    def __init__(self, delay):
        self.delay = delay

    def _ticker_gen(self, to):
        yield from range(to)

    async def a_ticker(self, to):
        for i in self._ticker_gen(to):
            yield i
            await asyncio.sleep(self.delay)

    def ticker(self, to):
        for i in self._ticker_gen(to):
            yield i
            sleep(self.delay)
While this doesn't make much of a difference here, it can work well in other contexts.
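As a hypothetical example of the same idea in another context (all names here are illustrative), the shared logic can live in plain synchronous helpers while only the I/O entry points differ:

def _parse_lines(raw: bytes):
    # shared, purely synchronous logic used by both entry points
    return [line.strip() for line in raw.decode().splitlines() if line.strip()]

def _read_bytes(path):
    with open(path, 'rb') as f:
        return f.read()

def read_lines(path):
    return _parse_lines(_read_bytes(path))

async def a_read_lines(path):
    # run the blocking read in a worker thread, then reuse the same parser
    raw = await asyncio.get_running_loop().run_in_executor(None, _read_bytes, path)
    return _parse_lines(raw)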
Abstract Syntax Tree transformation
Use AST rewriting and a map to transform coroutines into synchronous code. This can be quite fragile if you are not careful about how you recognise utility functions such as asyncio.sleep() vs time.sleep():
import inspect
import ast
import copy
import textwrap
import time

asynciomap = {
    # asyncio function to (additional globals, replacement source) tuples
    "sleep": ({"time": time}, "time.sleep")
}

class AsyncToSync(ast.NodeTransformer):
    def __init__(self):
        self.globals = {}

    def visit_AsyncFunctionDef(self, node):
        return ast.copy_location(
            ast.FunctionDef(
                node.name,
                self.visit(node.args),
                [self.visit(stmt) for stmt in node.body],
                [self.visit(stmt) for stmt in node.decorator_list],
                node.returns and self.visit(node.returns),
            ),
            node,
        )

    def visit_Await(self, node):
        return self.visit(node.value)

    def visit_Attribute(self, node):
        if (
            isinstance(node.value, ast.Name)
            and isinstance(node.value.ctx, ast.Load)
            and node.value.id == "asyncio"
            and node.attr in asynciomap
        ):
            g, replacement = asynciomap[node.attr]
            self.globals.update(g)
            return ast.copy_location(
                ast.parse(replacement, mode="eval").body,
                node
            )
        return node

def transform_sync(f):
    filename = inspect.getfile(f)
    lines, lineno = inspect.getsourcelines(f)
    ast_tree = ast.parse(textwrap.dedent(''.join(lines)), filename)
    ast.increment_lineno(ast_tree, lineno - 1)
    transformer = AsyncToSync()
    transformer.visit(ast_tree)
    transformed_globals = {**f.__globals__, **transformer.globals}
    exec(compile(ast_tree, filename, 'exec'), transformed_globals)
    return transformed_globals[f.__name__]
While the above is probably far from complete enough to fit all needs, and transforming ASTs can be daunting, it would let you maintain just the async version and map that version to a synchronous version directly:
>>> import example
>>> del example.UselessExample.ticker
>>> example.main()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../example.py", line 32, in main
    func(ue)
  File "/.../example.py", line 21, in func
    for value in ue.ticker(5):
AttributeError: 'UselessExample' object has no attribute 'ticker'
>>> example.UselessExample.ticker = transform_sync(example.UselessExample.a_ticker)
>>> example.main()
0
1
2
3
4
0
1
2
3
4
async/await is infectious by design.
Accept that your code will have different users (synchronous and asynchronous), that these users will have different requirements, and that over time the implementations will diverge.
Publish separate libraries
For example, compare aiohttp vs. aiohttp-requests vs. requests.
Likewise, compare asyncpg vs. psycopg2.
How to get there
Opt1. (easy) Clone the implementation and allow the copies to diverge.
Opt2. (sensible) Do a partial refactor: let e.g. the async library depend on and import the sync library.
Opt3. (radical) Create a "pure" library that can be used in both sync and async programs. For example, see https://github.com/python-hyper/hyper-h2 .
On the upside, testing is easier and more thorough. Consider how hard (or impossible) it is to force a test framework to evaluate all possible concurrent execution orders in an async program. A pure library doesn't need that :)
On the downside, this style of programming requires different thinking, is not always straightforward, and may be suboptimal. For example, instead of await socket.read(2**20) you'd write for event in fsm.push(data): ... and rely on your library user to provide you with data in good-sized chunks.
For context, see the backpressure argument in https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/
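A toy sketch of that style (all names illustrative): a "pure" line parser that never touches a socket; the caller pushes bytes in and consumes events, from sync or async code alike:

class LineParser:
    def __init__(self):
        self._buf = b""

    def push(self, data: bytes):
        # pure state machine: no I/O, no awaits, usable under any event loop or none
        self._buf += data
        *lines, self._buf = self._buf.split(b"\n")
        for line in lines:
            yield ("line", line)

# sync caller:  for event in parser.push(sock.recv(4096)): ...
# async caller: for event in parser.push(await reader.read(4096)): ...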

Turn a sync function into an awaitable function

I've got a situation where I can't be sure if a function will be sync or async. In other words, the function has a type signature of Union[Callable[..., Awaitable[Any]], Callable[..., Any]].
I can't seem to find a good (non-deprecated) way of turning the above type signature into a consistent type of Callable[..., Awaitable[Any]].
The only way I was able to find to do this is the following:
import asyncio
import inspect
from typing import Any, Awaitable, Callable, List, Union
def force_awaitable(function: Union[Callable[..., Awaitable[Any]], Callable[..., Any]]) -> Callable[..., Awaitable[Any]]:
if inspect.isawaitable(function):
# Already awaitable
return function
else:
# Make it awaitable
return asyncio.coroutine(function)
However, asyncio.coroutine (which is normally used as a decorator) is deprecated since Python 3.8. https://docs.python.org/3/library/asyncio-task.html#asyncio.coroutine
The alternative provided does not work for me here, since I don't use asyncio.coroutine as a decorator. Unlike a decorator, the async keyword can't be used as a function.
How do I turn a sync function (not awaitable) into an async function (awaitable)?
Other considered options
Above I've shown that you can detect whether something is awaitable. This allows us to change how we call it, like so:
def my_function():
    pass

if inspect.isawaitable(my_function):
    await my_function()
else:
    my_function()
However, this feels clunky, can create messy code, and adds unnecessary checks inside big loops. I want to be able to define how to call the function before entering my loop.
In NodeJS I would simply await the sync function:
// Note: Not Python!
function sync() {}
await sync();
When I try to do the same thing in Python, I'm greeted with an error:
def sync():
    pass

await sync()  # AttributeError: 'method' object has no attribute '__await__'
You can return an async function with the original function object in its scope:
def to_coroutine(f: Callable[..., Any]):
    async def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

def force_awaitable(function: Union[Callable[..., Awaitable[Any]], Callable[..., Any]]) -> Callable[..., Awaitable[Any]]:
    if inspect.iscoroutinefunction(function):
        return function
    else:
        return to_coroutine(function)
Now, if function is not awaitable, force_awaitable will return a coroutine function that contains function.
def test_sync(*args, **kwargs):
    pass

async def main():
    await force_awaitable(test_sync)('foo', 'bar')

asyncio.run(main())
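One caveat: the wrapper above still runs the sync function on the event loop thread, so a blocking call will stall the loop. If that matters and Python 3.9+ is available, a variant (an addition, not part of the original answer) could hand the call off to a thread:

def to_threaded_coroutine(f: Callable[..., Any]) -> Callable[..., Awaitable[Any]]:
    async def wrapper(*args, **kwargs):
        # asyncio.to_thread (3.9+) runs f in a worker thread and awaits its result
        return await asyncio.to_thread(f, *args, **kwargs)
    return wrapper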

Call count not working with async function

I have a function that has a decorator @retry, which retries the function if a certain exception happens. I want to test that this function executes the correct number of times, for which I have the following code, which works:
@pytest.mark.asyncio
async def test_redis_failling(mocker):
    sleep_mock = mocker.patch.object(retry, '_sleep')
    with pytest.raises(ConnectionError):
        retry_store_redis()
    assert sleep_mock.call_count == 4

@retry(ConnectionError, initial_wait=2.0, attempts=5)
def retry_store_redis():
    raise ConnectionError()
But if I modify retry_store_redis() to be an async function, sleep_mock.call_count is 0.
So you define "retry" as a function. Then you define a test, then you define some code that uses @retry.
@retry, as a decorator, is called at import time. So the order of operations is:
1. declare retry
2. declare the test
3. call retry with retry_store_redis as an argument
4. start your test
5. patch out retry
6. call the function you defined in step 3
so "retry" gets called once (at import time), your mock gets called zero times. To get the behavior you want, (ensuring that retry is actually re-calling the underlying function) I would do
@pytest.mark.asyncio
async def test_redis_failling(mocker):
    fake_function = MagicMock(side_effect=ConnectionError)
    decorated_function = retry(ConnectionError, initial_wait=2.0, attempts=5)(fake_function)
    with pytest.raises(ConnectionError):
        decorated_function()
    assert fake_function.call_count == 4
If you wanted to test this as built (instead of writing a test specifically for the decorator), you would have to mock out the original function inside the decorated function, which depends on how you implemented the decorator. With the default approach (plain nested functions, no libraries), you would have to inspect the __closure__ attribute. You can instead build an object that retains a reference to the original function; here is an example:
def wrap(func):
    class Wrapper:
        def __init__(self, func):
            self.func = func

        def __call__(self, *args, **kwargs):
            return self.func(*args, **kwargs)

    return Wrapper(func)

@wrap
def wrapped_func():
    return 42
In this scenario, you could patch the wrapped function at wrapped_func.func.
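A sketch of what that patching could look like (using pytest-mock's mocker as in the question; the asserted values are illustrative):

from unittest.mock import MagicMock

def test_wrapped_func(mocker):
    fake = MagicMock(return_value=42)
    # replace the stored original; Wrapper.__call__ now invokes the mock
    mocker.patch.object(wrapped_func, 'func', fake)
    assert wrapped_func() == 42
    assert fake.call_count == 1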

Decorators: perform actions after a function is awaited

I use Python 3.7 and have the following decorator:
def decorator(success_check: function):
    def wrap(func):
        async def inner(root, info, **args):
            func_result = await func(root, info, **args)
            if not success_check(func_result):
                pass  # do some action
            return func(root, info, **args)
        return inner
    return wrap
In the current implementation, func is awaited twice. Can I make it work with func awaited only once?
If you return await func(root, info, **args) or, even better, just return func_result, it will most likely solve your issue.
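Applied to the decorator from the question, that looks like this (the annotation is dropped so the sketch runs standalone):

def decorator(success_check):
    def wrap(func):
        async def inner(root, info, **args):
            func_result = await func(root, info, **args)
            if not success_check(func_result):
                pass  # do some action
            return func_result  # reuse the result instead of awaiting func again
        return inner
    return wrap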

Decorating an instance method and calling it from the decorator

I am using the nose test generators feature to run the same test with different contexts. Since it requires the following boilerplate for each test:
class TestSample(TestBase):
    def test_sample(self):
        for context in contexts:
            yield self.check_sample, context

    def check_sample(self, context):
        """The real test logic is implemented here"""
        pass
I decided to write the following decorator:
def with_contexts(contexts=None):
    if contexts is None:
        contexts = ['twitter', 'linkedin', 'facebook']
    def decorator(f):
        @wraps(f)
        def wrapper(self, *args, **kwargs):
            for context in contexts:
                yield f, self, context  # The line which causes the error
        return wrapper
    return decorator
The decorator is used in the following manner:
class TestSample(TestBase):
    @with_contexts()
    def test_sample(self, context):
        """The real test logic is implemented here"""
        var1 = self.some_valid_attribute
When the tests are executed, an error is thrown specifying that the attribute being accessed is not available. However, if I change the line which calls the method to the following, it works fine:
yield getattr(self, f.__name__), service
I understand that the above snippet creates a bound method, whereas in the first one self is passed manually to the function. However, as far as my understanding goes, the first snippet should work fine too. I would appreciate it if anyone could clarify the issue.
The title of the question is related to calling instance methods in decorators in general but I have kept the description specific to my context.
You can use functools.partial to tie the wrapped function to self, just like a method would be:
from functools import partial

def decorator(f):
    @wraps(f)
    def wrapper(self, *args, **kwargs):
        for context in contexts:
            yield partial(f, self), context
    return wrapper
Now you are yielding partials instead, which, when called as yieldedvalue(context), will call f(self, context).
As far as I can tell, some things don't fit together. First, your decorator goes like
def with_contexts(contexts=None):
    if contexts is None:
        contexts = ['twitter', 'linkedin', 'facebook']
    def decorator(f):
        @wraps(f)
        def wrapper(self, *args, **kwargs):
            for context in contexts:
                yield f, self, context  # The line which causes the error
        return wrapper
    return decorator
but you use it like
@with_contexts
def test_sample(self, context):
    """The real test logic is implemented here"""
    var1 = self.some_valid_attribute
This is wrong: this calls with_contexts(test_sample), but you need with_contexts()(test_sample). So do
@with_contexts()
def test_sample(self, context):
    """The real test logic is implemented here"""
    var1 = self.some_valid_attribute
even if you don't provide the contexts argument.
Second, you decorate the wrong function: your usage shows that the test function yields the check function for each context. The function you want to wrap does the job of the check function, but you have to name it after the test function.
Applying self to a method can be done with partial, as Martijn writes, but it can just as well be done the way Python does it under the hood: with
method.__get__(self, None)
or, maybe better,
method.__get__(self, type(self))
you can achieve the same.
you can achieve the same. (Maybe your original version works as well, with yielding the function to be called and the arguments to use. It was not clear to me that this is the way it works.)
