I would like to know if it's possible to call the async function get_all_allowed_systems from the create_app function, so that I have access to the database entries in ALLOWED_SYSTEMS populated by the get_all_allowed_systems call. One limitation: I can't make create_app an async function.
async def get_all_allowed_systems(app):
    global ALLOWED_SYSTEMS
    operation = prepare_exec(app.config.get_all_systems_procedure)
    ALLOWED_SYSTEMS = (await app['database'].execute(operation)).all()

def create_app():
    app = App(config=Config)
    app['database'] = AioDatabase(**app.config.dict('db_'))
    app['app_database'] = AioDatabase(app.config.app_db_url)
    get_all_allowed_systems(app)
    print(ALLOWED_SYSTEMS)
In Python 3.7+ you can just use asyncio.run(coroutine())
In earlier versions you have to get the event loop and run from there:
loop = asyncio.get_event_loop()
loop.run_until_complete(coroutine())
loop.close()
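Applied to the question's code, a minimal sketch (assuming the App, Config, and AioDatabase objects behave as in the snippet above, and that no event loop is already running when create_app is called) could look like this:

import asyncio

def create_app():
    app = App(config=Config)
    app['database'] = AioDatabase(**app.config.dict('db_'))
    app['app_database'] = AioDatabase(app.config.app_db_url)

    # Run the coroutine to completion from synchronous code (Python 3.7+);
    # asyncio.run creates a fresh event loop and closes it when done.
    asyncio.run(get_all_allowed_systems(app))

    print(ALLOWED_SYSTEMS)
    return app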
I have an async program built around a set of infinitely-running functions that run simultaneously.
I want to allow users to run a specific number of duplicates of specific functions.
A code example of what I have now:
async def run():
    await asyncio.gather(
        func_1(arg1, arg2),
        func_2(arg2),
        func_3(arg1, arg3),
    )

loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.close()
Let's say a user wants to run 2 instances of func_2. I want the core code to look like this:
async def run():
    await asyncio.gather(
        func_1(arg1, arg2),
        func_2(arg2),
        func_2(arg2),
        func_3(arg1, arg3),
    )

loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.close()
Any way to elegantly achieve this?
I would handle it this way (if we are talking about elegance).
# A tuple of the available functions
functions = (func_1, func_2, func_3)

async def run(**kwargs):
    def _tasks():
        for function in functions:
            yield from [function() for _ in range(kwargs.get(function.__name__, 1))]

    await asyncio.gather(*_tasks())

loop = asyncio.get_event_loop()
loop.run_until_complete(run(func_2=2))  # Run only func_2 twice
loop.close()
Edit
Interesting. If my func_1, func_2, and func_3 all have different sets of arguments, how would I pass them in your solution?
If you put the coroutines of the async functions into the functions tuple, you will not get the expected result. You can use an additional class to handle it: it stores a particular function together with its arguments and returns that function's coroutine when the object is called.
class Coroutine:
    def __init__(self, function, *args, **kwargs):
        self._function = function
        self._args = args
        self._kwargs = kwargs
        self.__name__ = function.__name__

    async def __call__(self):
        await self._function(*self._args, **self._kwargs)
Also, you will need to change the tuple of functions as follows.
functions = (
    Coroutine(func_1, "arg"),
    Coroutine(func_2, "arg1", kw1="kw"),
    Coroutine(func_3),
)
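Putting it together, a runnable sketch (reusing the Coroutine class above, with hypothetical func_1/func_2/func_3 stand-ins since the real functions aren't shown) might look like this:

import asyncio

async def func_1(arg):
    print("func_1", arg)

async def func_2(arg1, kw1=None):
    print("func_2", arg1, kw1)

async def func_3():
    print("func_3")

functions = (
    Coroutine(func_1, "arg"),
    Coroutine(func_2, "arg1", kw1="kw"),
    Coroutine(func_3),
)

async def run(**kwargs):
    def _tasks():
        for function in functions:
            # Coroutine copies __name__, so the duplicate-count lookup still works
            yield from [function() for _ in range(kwargs.get(function.__name__, 1))]

    await asyncio.gather(*_tasks())

loop = asyncio.get_event_loop()
loop.run_until_complete(run(func_2=2))  # func_2 runs twice, the others once
loop.close()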
I have this configuration:
async def A():
    B()

async def C():
    # do stuff
    ...

def B():
    # do stuff
    r = C()
    # want to call C() with the same loop and store the result

asyncio.run(A())
I don't really know how to do that. I tried this after reading some solutions on the web:
def B():
    task = asyncio.create_task(C())
    asyncio.wait_for(task, timeout=30)
    result = task.result()
But it doesn't seem to work...
Since asyncio does not allow its event loop to be nested, you can use the nest_asyncio library to allow nested use of asyncio.run and loop.run_until_complete in the current event loop or a new one:
First, install the library:
pip install nest-asyncio
Then, add the following to the beginning of your script:
import nest_asyncio
nest_asyncio.apply()
And the rest of the code:
import asyncio

async def A():
    B()

async def C():
    # do stuff
    ...

def B():
    # do stuff
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(C())
    # want to call C() with the same loop and store the result

asyncio.run(A())
The following code works as expected for the synchronous part, but the async call gives me a TypeError (TypeError: object NoneType can't be used in 'await' expression), presumably because the mock constructor can't properly deal with the spec. How do I properly tell Mockito that it needs to set up an asynchronous mock for async_method?
class MockedClass():
    def sync_method(self):
        pass

    async def async_method(self):
        pass

class TestedClass():
    def do_something_sync(self, mocked: MockedClass):
        mocked.sync_method()

    async def do_something_async(self, mocked: MockedClass):
        await mocked.async_method()

@pytest.mark.asyncio
async def test():
    my_mock = mock(spec=MockedClass)
    tested_class = TestedClass()

    tested_class.do_something_sync(my_mock)
    verify(my_mock).sync_method()

    await tested_class.do_something_async(my_mock)  # <- Fails here
    verify(my_mock).async_method()
Edit:
For reference, this is how it works with the standard mocks (the behavior that I expect):
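(The snippet for that comparison isn't included here. Purely as an illustration, and assuming Python 3.8+, where a spec'd MagicMock automatically turns async methods into AsyncMock, the standard-library version might look roughly like this:)

from unittest.mock import MagicMock

@pytest.mark.asyncio
async def test_with_stdlib_mock():
    # async_method is detected as a coroutine function and mocked as AsyncMock,
    # so awaiting it works without extra configuration.
    my_mock = MagicMock(spec=MockedClass)
    tested_class = TestedClass()

    tested_class.do_something_sync(my_mock)
    my_mock.sync_method.assert_called_once()

    await tested_class.do_something_async(my_mock)
    my_mock.async_method.assert_awaited_once()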
In mockito, my_mock.async_method() would not return anything useful by default, without further configuration. (I.e. it returns None, which is not awaitable.)
What I did in the past:
# a helper function
def future(value=None):
    f = asyncio.Future()
    f.set_result(value)
    return f

# your code
@pytest.mark.asyncio
async def test():
    my_mock = mock(spec=MockedClass)
    when(my_mock).async_method().thenReturn(future(None))  # fill in whatever you expect the method to return
    # ....
I'm trying to use a decorator on a function while passing a global variable to the decorator. However, I get an error message at the line with the decorator (@) stating that scheduler is not defined...
def wrap_task(scheduler_in):
    def inner(task):
        try:
            task()
        except:
            logger_sub.exception("Error!!!!")
            scheduler_in.shutdown(wait=False)
    return inner

@wrap_task(scheduler_in=scheduler)
def print_job():
    print("pipeline")
    raise FileExistsError

if __name__ == "__main__":
    scheduler = BlockingScheduler()  # from APScheduler
    scheduler.add_job(print_job, 'date', id="print_job")
    scheduler.add_listener(my_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
    (...)
P.S.: The problem shouldn't be that scheduler is used before it's defined, since I also create a listener for this scheduler, and the listener itself uses the same shutdown command without any error.
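One likely cause (an observation, not from the original thread): the decorator line runs when print_job is defined at import time, before the if __name__ == "__main__": block has created scheduler, whereas the listener only touches scheduler after it exists. A rough sketch of one way around it, creating the scheduler before the decorated function is defined (and adding the extra nesting level wrap_task needs so it wraps the job instead of calling it immediately):

import logging
from apscheduler.schedulers.blocking import BlockingScheduler

logger_sub = logging.getLogger("pipeline")

def wrap_task(scheduler_in):
    def outer(task):
        def inner():
            try:
                task()
            except Exception:
                logger_sub.exception("Error!!!!")
                scheduler_in.shutdown(wait=False)
        return inner
    return outer

# Define the scheduler before the decorator line is evaluated.
scheduler = BlockingScheduler()

@wrap_task(scheduler_in=scheduler)
def print_job():
    print("pipeline")
    raise FileExistsError

if __name__ == "__main__":
    scheduler.add_job(print_job, 'date', id="print_job")
    scheduler.start()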
I have a problem related to Tornado's coroutines.
There is a Python model A, which has the ability to execute some function. The function can be set from outside the model. I can't change the model itself, but I can pass any function I want. I'm trying to teach it to work with Tornado's IOLoop through my function, but I haven't managed to.
Here is the snippet:
import functools
import pprint

from tornado import gen
from tornado import ioloop

class A:
    f = None

    def execute(self):
        return self.f()

@gen.coroutine
def genlist():
    raise gen.Return(range(1, 10))

@gen.coroutine
def some_work():
    a = A()
    a.f = functools.partial(
        ioloop.IOLoop.instance().run_sync,
        lambda: genlist())
    print "a.f set"
    raise gen.Return(a)

@gen.coroutine
def main():
    a = yield some_work()
    retval = a.execute()
    raise gen.Return(retval)

if __name__ == "__main__":
    pprint.pprint(ioloop.IOLoop.current().run_sync(main))
So the thing is that I set the function in one part of the code, but execute it in another part through the model's method.
Now, Tornado 4.2.1 gives me "IOLoop is already running", but in Tornado 3.1.1 it works (though I don't know exactly how).
I know the following:
I can create a new IOLoop, but I would like to use the existing one.
I can wrap genlist with some function which knows that genlist's result is a Future, but I don't know how to block execution inside a synchronous function until the future's result is set.
Also, I can't use the result of a.execute() as a future object, because a.execute() could be called from other parts of the code, i.e. it should return a list instance.
So, my question is: is there any way to execute the asynchronous genlist from the synchronous model method using the current IOLoop?
You cannot restart the outer IOLoop here. You have three options:
Use asynchronous interfaces everywhere: change a.execute() and everything up to the top of the stack into coroutines. This is the usual pattern for Tornado-based applications; trying to straddle the synchronous and asynchronous worlds is difficult and it's better to stay on one side or the other.
Use run_sync() on a temporary IOLoop. This is what Tornado's synchronous tornado.httpclient.HTTPClient does, which makes it safe to call from within another IOLoop. However, if you do it this way the outer IOLoop remains blocked, so you have gained nothing by making genlist asynchronous.
Run a.execute on a separate thread and call back to the main IOLoop's thread for the inner function. If a.execute cannot be made asynchronous, this is the only way to avoid blocking the IOLoop while it is running.
import concurrent.futures

import tornado.concurrent
from tornado import gen
from tornado import ioloop

executor = concurrent.futures.ThreadPoolExecutor(8)

@gen.coroutine
def some_work():
    a = A()

    def adapter():
        # Convert the thread-unsafe tornado.concurrent.Future
        # to a thread-safe concurrent.futures.Future.
        # Note that everything including chain_future must happen
        # on the IOLoop thread.
        future = concurrent.futures.Future()
        ioloop.IOLoop.instance().add_callback(
            lambda: tornado.concurrent.chain_future(
                genlist(), future))
        return future.result()

    a.f = adapter
    print "a.f set"
    raise gen.Return(a)

@gen.coroutine
def main():
    a = yield some_work()
    retval = yield executor.submit(a.execute)
    raise gen.Return(retval)
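For comparison, option 2 (a temporary IOLoop, as tornado.httpclient.HTTPClient does) could look roughly like this sketch; as noted above, the outer IOLoop stays blocked while it runs:

@gen.coroutine
def some_work():
    a = A()

    def adapter():
        # Run genlist to completion on a private, temporary IOLoop.
        # The outer IOLoop remains blocked for the duration of this call.
        temp_loop = ioloop.IOLoop(make_current=False)
        try:
            return temp_loop.run_sync(genlist)
        finally:
            temp_loop.close()

    a.f = adapter
    raise gen.Return(a)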
Say, your function looks something like this:
@gen.coroutine
def foo():
    # does slow things
    pass

or

@concurrent.run_on_executor
def bar(i=1):
    # does slow things
    pass
You can run foo() like so:
from tornado.ioloop import IOLoop
loop = IOLoop.current()
loop.run_sync(foo)
You can run bar(..), or any coroutine that takes args, like so:
from functools import partial
from tornado.ioloop import IOLoop
loop = IOLoop.current()
f = partial(bar, i=100)
loop.run_sync(f)
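run_sync returns the coroutine's result and also accepts an optional timeout, so the value can be captured directly; a small sketch using foo from above:

from tornado.ioloop import IOLoop

loop = IOLoop.current()
result = loop.run_sync(foo, timeout=60)  # raises a TimeoutError if foo takes longer than 60s
print(result)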