Passing values between threading.Timer runs - Python

I'm scratching my head here. I'm not sure if this is the right way to approach it, but right now I can't think of another way (open to suggestions). I am using the BACpypes library, which requires you to create a device and an application, then call run(), which initializes the device on the network.
What I am trying to do is send a write_property command to the device every couple of minutes. The problem is I can only do so after I call the run() method, but as soon as I do, nothing beyond that call gets executed until I stop the program completely, because it's a single-threaded application.
So I thought I'd create a method called Update that runs every 30 seconds and tries to write to the device using threading.Timer (since it then runs on a separate thread). The issue is that the Update method can't write to the device until run() has been called, but I have to schedule Update before calling run(), otherwise it will never execute. Basically, what I want to know is: can I pass a bool to Update so that it skips write_property the first time, waits until run() has been executed, and then tries to write every time after that? Or perhaps just add a try/except and skip?
Example of what the code looks like (this is my main try block):
```python
isFirstRun = False
try:
    test_device = LocalDeviceObject(...)
    this_application = Application(test_device, args.ini.address)
    Update(None, this_application, isFirstRun)
    run()
```
The Update method (pseudocode):
```python
def Update(client, app, isFirstRun):
    threading.Timer(30.0, Update, [client, app, isFirstRun]).start()
    # if the run() method hasn't been called yet:
    #     skip
    # else:
    #     execute the rest of the code
```

Instead of calling Update directly, why not schedule it with threading.Timer(...) in your main thread as well? That way Update won't run until 30 seconds after run() has started, which achieves the same thing as the boolean, but is a lot less clunky.
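A minimal sketch of that suggestion, with stand-ins for the blocking run() loop and for write_property() (the 0.05 s interval and the stop event exist only so the demo terminates; real code would use 30.0 and the actual application object):

```python
import threading
import time

writes = []

def Update(app, stop_event):
    if stop_event.is_set():
        return
    # Re-arm the timer first, then do the periodic work.
    threading.Timer(0.05, Update, [app, stop_event]).start()
    writes.append(app)  # stand-in for the real write_property(...) call

stop = threading.Event()
# Schedule the first call instead of invoking Update directly: by the
# time it fires, the blocking run() loop is already active.
threading.Timer(0.05, Update, ["this_application", stop]).start()
time.sleep(0.3)  # stand-in for the blocking run() call
stop.set()
```

Because each timer fires on its own thread, the writes keep happening while the main thread is stuck inside run().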

Related

How to run tasks periodically without interrupting the whole program

I have a program that runs continuously: when it receives an input, it does a task and then goes right back to awaiting input. I'm attempting to add a feature that pings a gaming server every 5 minutes and notifies me if the result ever changes. The problem is that when I implement this, the program halts at this function and never gets to the part where I can give input. I believe I need multithreading/multiprocessing, but I have no experience with that, and after almost 2 hours of researching and wrestling with it, I haven't been able to figure it out.
I have tried to adapt a recursive program I found here but haven't managed to do so properly, though I feel this is where I was closest. I believe I could run this as two separate scripts, but then I would have to pipe the data around and it would get messier. It would be best to keep everything in one script.
```python
import time

def regular_ping(IP):
    last_status = None
    while True:
        present_status = ping_status(IP)   # ping_status(IP) returns the info I need
        if present_status != last_status:
            notify_output(present_status)  # notify_output(msg) notifies me of a change
            last_status = present_status
        time.sleep(300)
```
I would like this bit of code to run on its own, notifying me of a change (if there is one) every 5 minutes, while the rest of my program also runs and accepts inputs. Instead, the program stops at this function and won't run past it. Any help would be much appreciated, thanks!
You can use a thread or a process for this, but since this is not a CPU-bound operation, the overhead of dedicating a whole process isn't worth it; a thread is enough. You can implement it as follows:
```python
import threading

thread = threading.Thread(target=regular_ping, args=(ip,))
thread.start()

# Rest of the program runs concurrently with the ping loop
thread.join()
```
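Putting it together as a self-contained sketch (ping_status and notify_output are stubbed here, and the interval is shortened so the demo finishes; daemon=True lets the program exit even though the loop never returns):

```python
import threading
import time

changes = []

def ping_status(ip):
    return "online"  # stand-in for the real server ping

def notify_output(status):
    changes.append(status)  # stand-in for the real notification

def regular_ping(ip, interval):
    last_status = None
    while True:
        status = ping_status(ip)
        if status != last_status:
            notify_output(status)
            last_status = status
        time.sleep(interval)

# daemon=True: the thread won't keep the process alive at exit
threading.Thread(target=regular_ping, args=("192.0.2.1", 0.05), daemon=True).start()

time.sleep(0.2)  # stand-in for the main input loop
```

The main thread keeps accepting input while the pinger runs in the background; only changes in status trigger a notification.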

How can you skip a loop iteration in python if a function called inside the loop takes too long to execute?

I want to loop over a set of files and perform an operation on each of them, specified in runthingy(). However, because this operation gets stuck on some of those files and stops the entire program, I want to skip a file if it takes longer than 120 seconds to process. I am on Windows, where signal.SIGALRM is not available, so I am using the stopit library (https://pypi.org/project/stopit/) instead. The following example code aborts the while loop and prints "Time out" after 3 seconds:
```python
with stopit.ThreadingTimeout(3) as to_ctx_mrg:
    while True:
        continue

if to_ctx_mrg.state == to_ctx_mrg.TIMED_OUT:
    print("Time out")
```
However, in the following context it never prints "Time out" when the runthingy() function gets stuck or takes forever to complete:
```python
for filename in os.listdir(os.getcwd() + "\\files\\"):
    with stopit.ThreadingTimeout(120) as to_ctx_mrg:
        runthingy(filename)
    if to_ctx_mrg.state == to_ctx_mrg.TIMED_OUT:
        print("Time out")
        continue
```
I don't have experience with the library you are using, but it says it raises an asynchronous exception in the timed-out thread.
The question is why your function gets 'stuck'. The Python interpreter only detects that an exception has been raised while it is executing Python bytecode in that thread. If your function sticks because it has made a C call that hasn't returned, other Python threads can probably still run, but they won't be able to interrupt the stuck one.
You need to look more closely at why runthingy() blocks. Is it perhaps reading from a socket, or waiting for a file lock? If the blocking call has an optional timeout, set it fairly low: even if the code just retries the call after a timeout, it at least gives the interpreter a chance to get in there and abort the operation.
Better still, if you can find out why the function sticks, you may be able to fix the underlying problem instead of applying a brute-force timeout.
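To illustrate the timeout advice: if the blocking call is, say, a socket read, a per-call timeout hands control back to the interpreter instead of blocking indefinitely in C. A self-contained sketch with a deliberately silent local peer:

```python
import socket

# A listening socket that never sends anything, to simulate a stuck peer.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
client.settimeout(0.2)  # any blocking recv now raises socket.timeout after 0.2 s

timed_out = False
try:
    client.recv(4096)  # without the timeout this would block forever
except socket.timeout:
    timed_out = True

client.close()
server.close()
```

With the timeout in place, the loop can catch the exception, skip the file, and continue, rather than relying on an asynchronous exception that may never be delivered.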

Python threading script execution in Flask Backend

Currently I'm trying to use proper threading to execute a bunch of scripts.
They are organized like this:
Main thread (runs the Flask app)
- Analysis thread (runs the analysis script, which invokes all the needed scripts)
- 3 functions, each executed as a thread (the analysis is divided into 3 parts so it runs quicker)
My problem: I keep a global reference to the analysis thread so I can determine after the call whether the thread is running or not. The first time, it starts and runs just fine. You can then call the endpoint as often as you like and it won't do anything, because I return a 423 to indicate that the analysis is still running. After all scripts are finished, the check with analysis_thread.isAlive() returns False as it should, and I try to start the analysis again with analysis_thread.start(). But that doesn't work: it throws an exception saying a thread is already active and can't be started twice.
Is there a way to make the script startable, return another status code while it is running, and then be startable again once it is finished?
Thanks for reading and for all your help
Christoph
The now (hopefully) working solution is to never stop the thread and just let it wait.
In the analysis script I have a global flag, thread_status, which is False by default.
Inside the function it runs two while loops:
```python
while True:
    while not thread_status:
        time.sleep(30)
    # ... execution of the other scripts ...
    thread_status = False  # ensure the analysis runs just once per trigger
```
I then just set the flag to True from the controller class, so it starts executing.
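An alternative sketch of the same pattern using threading.Event instead of a polled global, which wakes the worker immediately rather than after up to 30 seconds (the analysis scripts are stubbed out here, and the final sleep only exists so the demo can observe the result):

```python
import threading
import time

run_requested = threading.Event()
results = []

def analysis_worker():
    while True:
        run_requested.wait()   # block until the controller sets the flag
        run_requested.clear()  # so the analysis runs once per trigger
        results.append("analysis done")  # stand-in for the real scripts

threading.Thread(target=analysis_worker, daemon=True).start()

# The endpoint just sets the flag instead of calling start() again;
# while the worker is busy, it can keep returning 423.
run_requested.set()
time.sleep(0.2)  # give the worker time to run (demo only)
```

The thread is created exactly once, so the "can't start a thread twice" exception never arises.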

Is it possible to create "cron" jobs with Python?

I would like to give the user the ability to have a function executed every 30 minutes, and also to stop it whenever he wants. (The user interacts with my application through a web frontend.)
How would one do this with Python?
What I thought of
One possibility I thought of is subprocess + an infinite loop with time.sleep:
The Python function gets its own script, whatever.py, which takes a command-line parameter stop_filename.
As soon as the user wants to start this "cron" job, subprocess creates a new process of whatever.py with stop_filename = "kill_job/{}".format(uuid.uuid4()).
When the user wants to stop the process, the file stop_filename is created. The process checks whether this file exists before each execution of the function and terminates if it does.
I store the generated stop_filename in the database for each process, so the user only needs to know which "cron job" he wants to kill.
Although this will work, there are a couple of things I don't like about it:
- Killing the process might take up to 30 minutes.
- The process could already be dead, and I don't know how to check for that.
- It seems too complicated.
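The stop-file idea above, sketched out. whatever.py and stop_filename come from the question; the job parameter and the short polling slices are assumptions added here, which also address the first drawback (a stop is noticed within about a second instead of after up to 30 minutes):

```python
import os
import time

def main(stop_filename, job, interval=30 * 60):
    """Run `job` every `interval` seconds until `stop_filename` appears."""
    while True:
        if os.path.exists(stop_filename):
            return  # the user asked this "cron" job to stop
        job()
        # Sleep in short slices so a stop request is noticed within ~1 s,
        # not only after the full 30-minute interval has elapsed.
        deadline = time.monotonic() + interval
        while time.monotonic() < deadline:
            if os.path.exists(stop_filename):
                return
            time.sleep(min(1, interval))
```

whatever.py would call main(sys.argv[1], the_user_function), and the controller simply creates the file at stop_filename to stop the job.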

Make sure only one instance of event-handler is triggered at a time in socket.io

I am trying to build a Node app which calls a Python script (that takes a long time to run). The user chooses parameters and clicks run, which triggers the socket.on('python-event') handler, which runs the Python script. I am using socket.io to send the user real-time status about the Python program, via the stdout stream I get from Python. The problem is that if the user clicks the run button twice, the event handler is triggered twice and runs 2 instances of the Python script, which corrupts stdout. How can I ensure only one event trigger happens at a time, and that if a new trigger happens it kills the previous instance and its stdout stream, then runs a new instance of the Python script with the updated parameters? I tried socket.once(), but it only allows the event to fire once per connection.
I would use a job queue for this kind of work: store each job's info in a queue so you can cancel it and query its status. You can use a Node module like kue.
