I am working with an asynchronous PyModbus server with a refreshing task. Everything is asyncio-based and built on the pymodbus example code, which can be found here:
https://pymodbus.readthedocs.io/en/latest/source/examples.html#updating-server-example
I am not very experienced with asyncio (just a few tutorials and simple experiments, which worked correctly, but this is my first attempt at creating anything more complicated) and I think I'm missing something.
In the example, asyncio.run(...) is called in the __main__ part. However, I want to modify this code, so I would like to have the server started outside of __main__, something like this:
async def myFunction(args):
    # do some other stuff
    asyncio.create_task(run_updating_server(run_args))

if __name__ == "__main__":
    cmd_args = get_commandline(
        server=True,
        description="Run asynchronous server.",
    )
    run_args = setup_updating_server(cmd_args)
    asyncio.run(myFunction(run_args), debug=True)
However, this doesn't create a nice, endlessly running task as in the example; everything is performed just once and then the program finishes.

I don't understand the difference: why was the server running endlessly in the example, but only once in my modification? Is there something in the create_task() vs. run() functionality that I'm missing?

I found this topic and tried implementing it with an explicit call to the event loop, like this:
async def new_main(args):
    asyncio.Task(run_updating_server(args))

if __name__ == "__main__":
    cmd_args = get_commandline(
        server=True,
        description="Run asynchronous server.",
    )
    run_args = setup_updating_server(cmd_args)
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(new_main(run_args))
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.close()
However, in that case I just got "Task was destroyed but it is pending!" errors...

My second question is: how should I properly implement tasks so that they run endlessly - the server, the updating function, and some others I want to add (not related to the Modbus server, just running alongside it and doing their own things)? I want to add some tasks to the event loop and have them run endlessly; let's say one should be executed every second, another should be triggered by some lock, etc.

I thought that the updating_task() in the example changed the values on the server once every second, but after investigating I see that it doesn't - it is just executed once. How can it be modified to behave as mentioned, i.e. increment the server values every second?

I guess that's something obvious, but lack of experience with asyncio and two days of brainstorming over the whole application have left me unable to see what I'm missing... I hope you will be able to guide me in the right direction - TIA!
What you are missing is the await keyword. An asyncio task without an await expression is almost always an error.
async def myFunction(args):
    # do some other stuff
    t = asyncio.create_task(run_updating_server(run_args))
    await t
There is a huge difference between this function and the one in your code. Both functions create a task. Your code is then finished. It immediately exits and your program ends. The function given here awaits completion of the newly created task. It doesn't progress past the await expression until the task run_updating_server is complete. The program will not exit until myFunction ends, so the program keeps running until the task finishes. If the task is an infinite loop, the program will run forever.
You say you want to do other things in addition to running this server. Each of those other things should probably be its own task, created before the server. You don't have to await these tasks unless you want one (or more) of them to finish before the server starts. I'm not sure what else you want to do, but the point of my answer is that your main task has to await something to keep the program from exiting immediately.
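To address the second question (a periodic updating task running alongside the server), a task can loop forever with await asyncio.sleep() between iterations, and the main coroutine can await all tasks at once with asyncio.gather(). A minimal, self-contained sketch; updating_task and run_server here are simplified stand-ins for the pymodbus example's functions, with a plain dict in place of the datastore:

```python
import asyncio

async def updating_task(context, interval=1.0):
    """Stand-in for the pymodbus updating_task: increment a value
    once per interval, forever. `context` is just a dict here."""
    while True:
        context["value"] += 1
        await asyncio.sleep(interval)

async def run_server(context):
    """Stand-in for run_updating_server(); a real server would
    serve Modbus requests here instead of just sleeping."""
    await asyncio.sleep(3600)

async def main(context):
    # gather() keeps main() alive until *all* of its tasks finish,
    # so the program no longer exits immediately after create_task().
    await asyncio.gather(updating_task(context), run_server(context))

# To run it (forever, until interrupted):
# asyncio.run(main({"value": 0}), debug=True)
```

Any extra task (a lock-triggered worker, etc.) is just one more coroutine added to the gather() call.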
So basically, I have made a tkinter app with a reminder utility, specifically to generate notifications at the scheduled time. Everything works fine when I run the app module and the module with the notification-generating function one at a time, but when I call the notification-generating function from the app module, the app doesn't work while the notification does. I want the app to run such that the notification-generating function runs in the background while the app module is open.
github link: https://github.com/click-boom/Trella
Looking into ChatGPT I found terms like threading and multiprocessing, but I have no concept of that; I still tried, but it didn't work.
Sure enough, what you are looking for is multithreading.
Here is a simple explanation of how multithreading works.

By default, a program runs everything on a single thread; in most programming languages this is the default behaviour. In that case, Second Task has to wait for First Task to complete.

If you want several tasks to run concurrently, you can use multithreading.
This is how you could implement this in Python.
Monothreading:
from time import sleep

def firstTask():
    time = 10
    for i in range(time):
        sleep(1)
        print(f'I have been running for {i}s')

def secondTask():
    print('All I want to do is run once')

firstTask()
secondTask()
Here, secondTask will only run after firstTask is done (i.e. after 10 seconds).
Multithreading:
from threading import Thread
from time import sleep

def firstTask():
    time = 10
    for i in range(time):
        sleep(1)
        print(f'I have been running for {i}s')

def secondTask():
    print('All I want to do is run once')

first_thread = Thread(target=firstTask)
second_thread = Thread(target=secondTask)
first_thread.start()
second_thread.start()
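If the main program should wait for both threads to finish before exiting, a join() call per thread can be added. Repeating the example with that change (and a shortened sleep so it finishes quickly):

```python
from threading import Thread
from time import sleep

def firstTask():
    time = 3
    for i in range(time):
        sleep(0.1)  # shortened so the example finishes quickly
        print(f'I have been running for {i}s')

def secondTask():
    print('All I want to do is run once')

first_thread = Thread(target=firstTask)
second_thread = Thread(target=secondTask)
first_thread.start()
second_thread.start()

# join() blocks until the corresponding thread has finished, so the
# main program does not exit while the tasks are still running.
first_thread.join()
second_thread.join()
```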
I hope this will help someone!
I'm creating a Discord bot to play poker. I have a function wait_for_betting_to_end, which is defined like this:
def wait_for_betting_to_end():
    while not_all_players_have_bet():
        pass
I have a command poker inside a cog, which contains the following code fragment:
@commands.command(name='poker')
async def poker(self, ctx):
    self.game_started = True
    # ...
    await preflop()
    wait_for_betting_to_end()
    # ...
I have a Discord command bet inside a cog:
@commands.command(name='bet')
async def bet(self, ctx, amt):
    if not self.game_started:
        print("You're not playing a game.")
        return
    # does something that would make wait_for_betting_to_end stop
The problem is that the user is never able to run the bet command while playing poker; the execution flow remains stuck in wait_for_betting_to_end forever. While not playing, bet correctly displays the error and exits.
How can I fix this?
The problem with your code is the infinite loop in your wait_for_betting_to_end() function. This is a mistake stemming from the thought that discord.py uses multithreading to get its asynchronous functionality (I can guess that much from the tags); however, it doesn't. asyncio works in a single thread: it performs a task (like receiving a message or processing a command), then moves on to the next task on completion of that one. The power of asyncio stems from the ability to temporarily suspend a task when no progress can be made (like waiting for a response, or just plain sleeping), continue on to the next task, and resume the suspended task when it needs to. Every command that your bot gets is handled in its own task. Your infinite loop never releases the poker task, so it blocks the whole event loop.

To fix your problem there are a few possibilities, for example:

Instead of just infinitely looping, call await asyncio.sleep(0.1) instead of pass in your loop. This allows your bot to receive messages from Discord in the meantime, and thus react to responses from your users. To stop the while loop you could use an attribute such as self.value, which you set to False when needed (in your bet command). Maybe use something like a dictionary of games for this, as you probably want to run different games at the same time.
I don't really work with cogs, so I can't confidently give you a worked-out example (at least not without risking missing something that can easily be done with cogs), but this should put you on the right path, I believe.
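A minimal sketch of that first suggestion, stripped of the cog machinery; betting_done is a hypothetical flag standing in for self.value, and the real poker command would call await self.wait_for_betting_to_end():

```python
import asyncio

class PokerGame:
    def __init__(self):
        self.betting_done = False  # the bet command would set this to True

    async def wait_for_betting_to_end(self):
        # await asyncio.sleep() hands control back to the event loop on
        # every iteration, so other commands (like bet) can run meanwhile.
        while not self.betting_done:
            await asyncio.sleep(0.1)
```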
I'm trying to build a script based on data provided by a WebSocket, but I have a tricky problem I can't solve. I have two cells.
The first one:
import asyncio
import json

import websockets

msg = ''
stream = {}

async def call_api(msg):
    async with websockets.connect('wss://www.bitmex.com/realtime?subscribe=quote:XBTUSD,quote:ETHU19') as websocket:
        await websocket.send(msg)
        while websocket.open:
            response = await websocket.recv()
            response = json.loads(response)
            if 'data' in response:
                for quote in response['data']:
                    stream[quote['symbol']] = quote['askPrice']

loop = asyncio.get_event_loop()
loop.create_task(call_api(msg))
The second one:
stream['XBTUSD']
If I run the first cell in Jupyter Notebook and then run the second cell manually afterward, Python prints the correct value. But if I press the "restart the current kernel and re-execute the whole notebook" button, I get the error KeyError: 'XBTUSD' in the second cell. This error also happens when I run the script with the Python shell.
I can't understand the difference in behavior between these two executions.
This is because you created an asyncio task in the first cell but did not wait for it to finish. loop.create_task() returns immediately and lets the event loop continue execution of the created task in the background for as long as the event loop is alive. (In this case, the event loop keeps running while your notebook kernel is running.) Therefore, loop.create_task() makes your Jupyter notebook think that the first cell is done immediately.

Note that Jupyter notebook itself also works asynchronously with respect to the kernel process, so if you run the second cell too quickly after the first (e.g., using "restart the current kernel and re-execute the whole notebook" instead of manually clicking the Run button), the first cell's asyncio task may not have finished before the second cell's execution starts.

To ensure that the first cell actually finishes the task before reporting that its execution has finished, use run_until_complete() instead of create_task():
loop = asyncio.get_event_loop()
loop.run_until_complete(call_api(msg))
or, to get additional control over your task with a reference to it:
loop = asyncio.get_event_loop()
t = loop.create_task(call_api(msg))
loop.run_until_complete(t)
If you want to keep the task running in background for an indefinite time, you need a different approach.
Don't use Jupyter notebook for this; write a daemonized process that continuously fetches and processes websocket messages. Jupyter does not provide any means to keep track of background asyncio tasks in the kernel process or to execute cells on event triggers from such background tasks. Jupyter notebook is simply not a tool for such patterns.

To decouple the websocket message receiver from the processing routines, use an intermediate queue. If both sides run in the same process and the same event loop, you may use asyncio.Queue. If the processing happens in a different thread using synchronous code, you could try out janus. If the processing happens in a different process, use multiprocessing.Queue or some other IPC mechanism.
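A minimal sketch of the asyncio.Queue variant, with the websocket receiver replaced by a dummy producer and a sentinel value used for shutdown (the message fields mimic the quote messages above):

```python
import asyncio

async def receiver(queue):
    """Stand-in for the websocket loop: push each message onto the queue.
    A real receiver would loop over websocket.recv() instead."""
    for i in range(3):
        await queue.put({"symbol": "XBTUSD", "askPrice": 100.0 + i})
    await queue.put(None)  # sentinel: no more messages

async def processor(queue, stream):
    """Consume messages and update the shared quote dict."""
    while True:
        msg = await queue.get()
        if msg is None:
            break
        stream[msg["symbol"]] = msg["askPrice"]

async def main():
    queue = asyncio.Queue()
    stream = {}
    await asyncio.gather(receiver(queue), processor(queue, stream))
    return stream

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The queue lets the two sides run at their own pace: the receiver never blocks on slow processing, and the processor simply awaits the next message.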
I have a program that runs constantly; when it receives an input, it does a task and then goes right back to awaiting input. I'm attempting to add a feature that will ping a gaming server every 5 minutes and notify me if the results ever change. The problem is that if I attempt to implement this, the program halts at this function and won't go on to the part where I can input. I believe I need multithreading/multiprocessing, but I have no experience with that, and after almost 2 hours of researching and wrestling with it I haven't been able to figure it out.

I have tried to use the recursive program I found here but haven't been able to adapt it properly, though I feel this is where I was closest. I believe I could run this as two separate scripts, but then I would have to pipe the data around and it would become messier. It would be best for the rest of the program to keep everything in one script.
import time

def regular_ping(IP):
    last_status = None
    while True:
        # ping_status(IP) is another function that returns the info I need
        present_status = ping_status(IP)
        if present_status != last_status:
            # notify_output(msg) is a function that notifies me of a change
            notify_output(present_status)
            last_status = present_status
        time.sleep(300)
I would like this bit of code to run on its own, notifying me of a change (if there is one) every 5 minutes, while the rest of my program also runs and accepts inputs. Instead, the program stops at this function and won't run past it. Any help would be much appreciated, thanks!
You can use a thread or a process for this. But since this is not a CPU-bound operation, the overhead of dedicating a process is not worth it, so a thread would be enough. You can implement it as follows:
import threading
thread = threading.Thread(target=regular_ping, args=(ip,))
thread.start()
# Rest of the program
thread.join()
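One detail worth noting: thread.join() blocks until the pinger finishes, but regular_ping loops forever, so the program would never get past that line. If the pinger should simply die when the main program exits, the thread can be made a daemon instead; no join() is needed then. A sketch with a dummy ping loop (the IP address is a placeholder):

```python
import threading
import time

def regular_ping(ip):
    # stand-in for the real loop that pings every 5 minutes
    for _ in range(3):
        time.sleep(0.05)

# daemon=True: this thread will not keep the process alive on its
# own, so the main program can exit without joining the pinger.
thread = threading.Thread(target=regular_ping, args=("203.0.113.1",), daemon=True)
thread.start()
```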
Currently I'm trying to use proper threading to execute a bunch of scripts.

They are organized like this:

Main thread (runs the Flask app)
-Analysis thread (runs the analysis script, which invokes all needed scripts)
-3 different functions executed as threads (divided into 3 parts so the analysis runs quicker)

My problem is that I have a global variable holding the analysis thread, to be able to determine after the call whether the thread is running or not. The first time, it starts and runs just fine. From then on you can call that endpoint as often as you like and it won't do anything, because I return a 423 to state that the thread (the analysis) is still running. After all scripts are finished, the if clause with analysis_thread.isAlive() returns False, as it should, and tries to start the analysis again with analysis_thread.start(). But that doesn't work; it throws an exception saying the thread is already active and can't be started twice.

Is there a way to achieve this: the script can be started, returns another status code while it is running, and can be started again once it is finished?
Thanks for reading and for all your help
Christoph
The now hopefully working solution is to never stop the thread and just let it wait.

In the analysis script I have a global variable which indicates the status; it is set to False by default.

Inside the function it runs two while loops:
while True:
    while not thread_status:
        time.sleep(30)
    # execution of the other scripts
    thread_status = False  # to ensure the execution runs just once
I then just set the flag to True from the controller class, so it starts executing.
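The same never-stop-the-thread idea can be expressed with a threading.Event instead of a polled flag, so the worker blocks in wait() rather than sleeping in a loop. A sketch under that assumption (a break is added purely so the example terminates; the real worker would loop forever):

```python
import threading

start_analysis = threading.Event()
results = []

def analysis_worker():
    while True:
        start_analysis.wait()    # block until the controller triggers a run
        start_analysis.clear()   # ensure the analysis runs just once per trigger
        results.append("analysis done")  # stand-in for invoking the real scripts
        break  # exit hook so this sketch terminates; the real worker loops forever

worker = threading.Thread(target=analysis_worker, daemon=True)
worker.start()
start_analysis.set()  # what the controller endpoint would do on each request
worker.join(timeout=2)
```

Compared to polling every 30 seconds, set() wakes the worker immediately, and is_set() can double as the "analysis is running" check for the 423 response.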