Can't get my async function to work properly - Python

I'm trying to build a script based on data provided by a WebSocket, but I have a tricky problem I can't solve. I have two cells.
The first one:
import asyncio
import json

import websockets

msg = ''
stream = {}

async def call_api(msg):
    async with websockets.connect('wss://www.bitmex.com/realtime?subscribe=quote:XBTUSD,quote:ETHU19') as websocket:
        await websocket.send(msg)
        while websocket.open:
            response = await websocket.recv()
            response = json.loads(response)
            if 'data' in response:
                response = response['data']
                for n in range(len(response)):
                    symbol = response[n]['symbol']
                    stream[symbol] = response[n]['askPrice']

loop = asyncio.get_event_loop()
loop.create_task(call_api(msg))
The second one:
stream['XBTUSD']
If I run the first cell in Jupyter Notebook and then run the second cell manually afterward, Python prints the correct value. But if I press the "restart the current kernel and re-execute the whole notebook" button, I get KeyError: 'XBTUSD' in the second cell. The same error occurs when I run the script in the Python shell.
I can't understand the difference in behavior between these two executions.

This is because you created an asyncio task in the first cell but did not wait for it to finish. loop.create_task() returns immediately and lets the event loop continue executing the created task in the background for as long as the event loop is alive. (In this case, the event loop keeps running while your notebook kernel is running.) Therefore, loop.create_task() makes Jupyter think that the first cell is done immediately.
Note that the Jupyter notebook frontend itself also works asynchronously against the kernel process, so if you run the second cell too quickly after the first (e.g., by using "restart the current kernel and re-execute the whole notebook" instead of clicking the Run button manually), the first cell's asyncio task may not have finished before the second cell starts executing.
To ensure that the first cell actually finishes the task before reporting that the cell's execution is complete, use run_until_complete() instead of create_task():
loop = asyncio.get_event_loop()
loop.run_until_complete(call_api(msg))
or, to get additional control over your task with a reference to it:
loop = asyncio.get_event_loop()
t = loop.create_task(call_api(msg))
loop.run_until_complete(t)
If you want to keep the task running in background for an indefinite time, you need a different approach.
Don't use Jupyter notebook for this; write a daemonized process that continuously fetches and processes websocket messages. Jupyter provides no means to keep track of background asyncio tasks in the kernel process or to execute cells triggered by events from such background tasks. Jupyter notebook is simply not a tool for such patterns.
To decouple the websocket message receiver from the processing routines, use an intermediate queue. If both sides run in the same process and the same event loop, you may use asyncio.Queue. If the processing happens in a different thread using synchronous code, you could try out janus. If the processing happens in a different process, use multiprocessing.Queue or some other IPC mechanism.
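For the same-process, same-event-loop case, a minimal sketch of the asyncio.Queue approach could look like this (receiver, processor, and the print call are illustrative stand-ins, not part of the original code):

import asyncio
import json

queue = asyncio.Queue()

async def receiver(websocket):
    # Only receive and enqueue; no processing here.
    while websocket.open:
        await queue.put(await websocket.recv())

async def processor():
    # Consume messages at its own pace, decoupled from the receiver.
    while True:
        msg = await queue.get()
        print(json.loads(msg))  # stand-in for the real processing
        queue.task_done()

Both coroutines would be started as tasks on the same loop, so the receiver never blocks on slow processing.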

Related

Asyncio - endless running tasks - modifying pymodbus example

I am working with an asynchronous PyModbus server with a refreshing task; everything is asyncio-based and built on the pymodbus example code, which can be found here:
https://pymodbus.readthedocs.io/en/latest/source/examples.html#updating-server-example
I am not very experienced with asyncio (just a few tutorials and simple experiments that worked correctly; this is my first attempt at creating anything more complicated) and I think I'm missing something.
In the example, asyncio.run(...) is called in the __main__ part. However, I want to modify this code, and therefore I would like to have the server started outside of __main__, something like this:
async def myFunction(args):
    # do some other stuff
    asyncio.create_task(run_updating_server(run_args))

if __name__ == "__main__":
    cmd_args = get_commandline(
        server=True,
        description="Run asynchronous server.",
    )
    run_args = setup_updating_server(cmd_args)
    asyncio.run(myFunction(run_args), debug=True)
However, this doesn't create a nice, endlessly running task as in the example; everything is performed just once and then the program finishes.
I don't understand the difference: why does the server run endlessly in the example but only once in my modification? Is there something in the create_task() vs. run() functionality that I'm missing?
I found this topic and tried implementing it with an explicit call to the event loop, like this:
async def new_main(args):
    asyncio.Task(run_updating_server(args))

if __name__ == "__main__":
    cmd_args = get_commandline(
        server=True,
        description="Run asynchronous server.",
    )
    run_args = setup_updating_server(cmd_args)
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(new_main(run_args))
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.close()
However, in that case I just got Task was destroyed but it is pending! errors...
My second question is: how should I properly implement tasks so that they run endlessly: the server, the updating function, and some others I want to implement (not related to the Modbus server, just running alongside it and doing their own things)? I want to add some tasks to the event loop and have them run endlessly; say, one should be executed every second, another should be triggered by some lock, etc.
I thought that the updating task() in the example changed the values on the server once every second, but after investigating I see that it doesn't; it is just executed once. How can it be modified to behave as mentioned, i.e., increment the server values every second?
I guess that's something obvious, but lack of experience with asyncio and two days of brainstorming over the whole application made me too dumb to understand what I am missing... I hope you will be able to point me in the right direction - TIA!
What you are missing is the await keyword. An asyncio task without an await expression is almost always an error.
async def myFunction(args):
    # do some other stuff
    t = asyncio.create_task(run_updating_server(run_args))
    await t
There is a huge difference between this function and the one in your code. Both functions create a task, but your version then finishes: it immediately exits and your program ends. The function given here awaits completion of the newly created task; it doesn't progress past the await expression until run_updating_server completes. The program will not exit until myFunction ends, so the program keeps running until the task finishes. If the task is an infinite loop, the program will run forever.
You say you want to do other things in addition to running this server. Each of those other things should probably be another task, created before the server. You don't have to await those tasks unless you want one (or more) of them to finish before the server starts. I'm not sure what else you want to do, but the point of my answer is that your main task has to await something to keep the program from exiting immediately.
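For instance, a once-per-second updater can be its own task created alongside the server. Here is a sketch, where periodic_update and run_server are illustrative stand-ins for the question's coroutines:

import asyncio

async def periodic_update():
    # Runs forever: do some work once per second,
    # e.g. increment the server's register values.
    while True:
        await asyncio.sleep(1)

async def run_server():
    # Stand-in for run_updating_server from the pymodbus example.
    await asyncio.Event().wait()  # serve until cancelled

async def main():
    updater = asyncio.create_task(periodic_update())
    server = asyncio.create_task(run_server())
    # Awaiting both keeps the program alive as long as either task runs.
    await asyncio.gather(updater, server)

if __name__ == "__main__":
    asyncio.run(main())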

Python threading script execution in Flask Backend

Currently I'm trying to use proper threading to execute a bunch of scripts.
They are organized like this:
Main thread (runs the Flask app)
- Analysis thread (runs the analysis script, which invokes all needed scripts)
-- 3 different functions executed as threads (divided into 3 parts so the analysis runs quicker)
My problem: I have a global variable holding the analysis thread so I can determine after the call whether the thread is running or not. The first time, it starts and runs just fine. You can then call that endpoint as often as you like and it won't do anything, because I return a 423 to state that the thread (the analysis) is still running. After all scripts are finished, the if clause with analysis_thread.isAlive() returns false as it should and tries to start the analysis again with analysis_thread.start(), but that doesn't work: it throws an exception saying the thread is already active and can't be started twice.
Is there a way to achieve this: the script can be started; while it is running, the endpoint returns another status code; and when it is finished, I can start it again?
Thanks for reading and for all your help
Christoph
The now hopefully working solution is to never stop the thread and just let it wait.
In the analysis script I have a global variable that indicates the status; it is set to False by default.
Inside the function it runs two nested while loops:
while True:
    while not thread_status:
        time.sleep(30)
    # ... execution of the other scripts ...
    thread_status = False  # to ensure the execution runs just once
I then just set the flag to True from the Controller class so it starts executing.
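Here is a sketch of the same pattern using threading.Event instead of a bare boolean (the endpoint and function names are made up for illustration); Event.wait() also wakes the worker immediately instead of polling every 30 seconds:

import threading
import time

from flask import Flask

app = Flask(__name__)
run_requested = threading.Event()

def analysis_worker():
    # One long-lived thread: wait for the flag, run once, reset.
    while True:
        run_requested.wait()
        run_analysis_scripts()
        run_requested.clear()  # ensure the execution runs just once

def run_analysis_scripts():
    time.sleep(5)  # placeholder for the real analysis scripts

@app.route('/analyze', methods=['POST'])
def analyze():
    if run_requested.is_set():
        return 'analysis still running', 423
    run_requested.set()
    return 'analysis started', 202

threading.Thread(target=analysis_worker, daemon=True).start()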

Block jupyter notebook cell execution till specific message received

I'm trying to implement an asynchronous, distributed computation engine for Python that is compatible with Jupyter notebook. The system is based on a 'push notification' approach, which makes it (almost, I hope) impossible to let the user wait for a specific computation result (i.e., to block execution of a given notebook cell until the message with the expected result is delivered). To be precise, I'm trying to:
Add a new task to the Jupyter notebook event loop (the task periodically checks in a while loop whether a specific msg has arrived, and breaks when it has)
Block the current cell, waiting for the task to be completed.
Still be able to process incoming messages (using RabbitMQ, Pika, and slightly modified code from http://pika.readthedocs.io/en/0.10.0/examples/asynchronous_consumer_example.html)
I have prepared notebooks presenting my problem: https://github.com/SLEEP-MAN/RabbitMQ_jupyterNotebook_asyncio
Any ideas? Is it possible (maybe some IPython/IpyKernel magic ;>?), or do I have to change my approach by 180 degrees?
Your issue is that you mixed two different loops into one; that is why it didn't work. You need to make a few changes.
First, use AsyncioConnection instead of TornadoConnection:
return adapters.AsyncioConnection(pika.URLParameters(self._url),
                                  self.on_connection_open)
Next, you need to remove the line below:
self._connection.ioloop.start() #throws exception but not a problem...
This is because your loop is already started in connect. Then use the code below to wait:
loop = asyncio.get_event_loop()
loop.run_until_complete(wait_for_eval())
And now it works.
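The wait_for_eval coroutine referenced above is not reproduced here; under the assumption that your message handler stores the reply in a shared variable, it could be as simple as a polling loop that yields to the event loop between checks (a sketch, with result as a hypothetical name):

import asyncio

result = None  # set by your Pika message handler when the reply arrives

async def wait_for_eval():
    # Yield to the event loop between checks so the AsyncioConnection
    # consumer keeps processing incoming messages.
    while result is None:
        await asyncio.sleep(0.1)
    return result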

Make sure only one instance of event-handler is triggered at a time in socket.io

I am trying to build a Node app which calls a Python script (it takes a lot of time to run). The user essentially chooses parameters and then clicks Run, which triggers the event in socket.on('python-event'), and this runs the Python script. I am using socket.io to send real-time data to the user about the status of the Python program, using the stdout stream I get from Python. But the problem I am facing is that if the user clicks the Run button twice, the event handler is triggered twice and runs 2 instances of the Python script, which corrupts stdout. How can I ensure that only one event trigger happens at a time, and that if a new event trigger happens, it kills the previous instance and its stdout stream and then runs a new instance of the Python script with the updated parameters? I tried using socket.once(), but it only allows the event to trigger once per connection.
I would use a job queue for this kind of work: store each job's info in the queue so you can cancel it and get its status. You can use a Node module like kue.

python daemon server crashes during HTML popup overlay callback using asyncio websocket coroutines

My Python daemon process stops working when its asyncio run_forever loop listens to websocket calls that originate from a separate run_until_complete asyncio coroutine (or thread) but run within the same process (PID). More specifically, I code a localhost server in Python 3.4.3 that updates, via the webbrowser function, an HTML web page in my Firefox webbrowser. I then try to capture button presses elicited in a temporary popup window overlay and relay the associated action strings via websocket calls back to the daemonized server.
Things work fine, and calls are processed flawlessly by the websocket server embedded in the run_forever asyncio loop, when the websocket client call comes from an independent non-daemonized PID invoked via a command-line call to the same Python script. Things also work fine for the websocket server when an HTML-GUI-based websocket call hits the run_forever asyncio loop. But things go wrong when an initial asyncio coroutine process requires additional user input (through a locking HTML window overlay and buttons such as 'accept', 'cancel' or 'quit') and thereby attempts to capture the button-press-related websocket string signal through a brief separate run_until_complete asyncio coroutine.
In other words, I am trying to find a way to control flow through my Python script where intermittently webbrowser-GUI user input is required to influence program logic. How can that be achieved in a pure Python solution?
OK, I found a solution to the problem described above, with two changes:
1) call_soon_threadsafe: this one finally 'isolates' my second asyncio loop, so that the first asyncio loop survives when the following line gets invoked:
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(asyncio.async, websockets.serve(myFunct2, IP, PORT2))
loop.run_forever()
2) I use a separate port number, PORT2, for the HTML popup overlay button websocket callbacks, which corresponds to the second asyncio websocket loop (see above). In sum, regular GUI callbacks go to PORT1, while the popup GUI calls go to PORT2, for which the second asyncio websocket loop is created temporarily.
