Python threading script execution in Flask Backend

Currently I'm trying to use proper threading to execute a bunch of scripts.
They are organized like this:
- Main thread (runs the Flask app)
- Analysis thread (runs the analysis script, which invokes all the needed scripts)
- 3 functions, each executed as a thread (the analysis is divided into 3 parts so it runs quicker)
My problem: I keep a global variable holding the analysis thread so that I can determine, after the call, whether the thread is running or not. The first time, it starts and runs just fine. While it runs, you can call the endpoint as often as you like and nothing happens, because I return a 423 to state that the analysis is still running. After all the scripts are finished, the check with analysis_thread.isAlive() returns False as it should, and the endpoint tries to start the analysis again with analysis_thread.start(). That doesn't work: it throws an exception saying the thread is already active and can't be started twice.
Is there a way to have the script start, return another status code while it is running, and be startable again once it has finished?
Thanks for reading and for all your help
Christoph

The now hopefully working solution is to never stop the thread and just let it wait.
In the analysis script I have a global variable which indicates the status; it is set to False by default.
Inside the function it runs two while loops:

while True:
    while not thread_status:
        time.sleep(30)
    # execution of the other scripts goes here
    thread_status = False  # ensure the execution runs just once

I then just set the flag to True from the controller class, so it starts executing.
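
A minimal sketch of that pattern, assuming a threading.Event in place of the bare global flag (the route name and run_all_scripts are illustrative, not from the original code); the Event avoids the 30-second polling delay and the race of reading an unsynchronized global:

import threading
import time

from flask import Flask

app = Flask(__name__)
analysis_requested = threading.Event()

def run_all_scripts():
    time.sleep(5)  # stand-in for the three analysis threads

def analysis_worker():
    while True:
        analysis_requested.wait()   # sleeps until the endpoint sets the flag
        run_all_scripts()
        analysis_requested.clear()  # ready to accept the next request

threading.Thread(target=analysis_worker, daemon=True).start()

@app.route('/analysis', methods=['POST'])
def start_analysis():
    if analysis_requested.is_set():
        return 'analysis still running', 423
    analysis_requested.set()
    return 'analysis started', 202

Because the worker thread is created exactly once and never exits, the "threads can only be started once" exception never comes up.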

Related

Create a Process in Python that Starts After Main Process Ends

I have a Python script named "prog.py". I want to add a feature that opens a new process that watches the operation of the current script. When the script terminates, the process recognizes the termination and then invokes a certain function. Here is pseudo-code:

while (script is active):
    sleep(1)  # check its status once a second
func()
Do you have any idea how to do it?
Is there a reason the other process needs to be launched first? It seems like you could do this more efficiently and reliably by just exec'ing when the first process completes. For example:
import atexit
import os
atexit.register(os.execlp, 'afterexitscript.py', 'afterexitscript.py', 'arg1', 'arg2')
When the current Python process exits, it will seamlessly replace itself with your other script, which need not go to the trouble of including a polling loop. Or you could just use atexit to execute whatever func is directly in your main script and avoid a new Python launch.
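
For the second variant, registering func directly is even simpler (a sketch; func stands for whatever follow-up work you need):

import atexit

def func():
    # follow-up work that should run when prog.py terminates
    print('prog.py has finished')

atexit.register(func)

One caveat worth knowing: atexit handlers run on normal interpreter shutdown, but not if the process dies from an unhandled signal or os._exit().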

Python sys.exit() not working in Windows 7 command prompt

So I have a python script which I run in the command prompt as you'd expect
python myscript.py
Inside my script I have a line that intercepts Ctrl+C and is supposed to tidy up a few things and shut down the script.
import sys
import threading

def check_for():
    while True:
        pass  # Perform check and operation if needed

if __name__ == '__main__':
    try:
        threading.Thread(target=check_for).start()
        # Perform script duties
    except KeyboardInterrupt:
        print "Tidying up a few things"
        # Performs tidying duties
        sys.exit()
But it doesn't seem to actually exit the program. It prints the message, so I know it's getting the interrupt signal, but I don't get control back in the command prompt, and when I look in Task Manager I see the Python process still running. I only regain control once I end that process. In fact, even if the program runs to completion normally, without being interrupted, the process remains.
My program does spawn child threads, which is why I use sys.exit() over other methods, which I've heard can't be caught and end things more abruptly.
Am I doing something wrong? Is there a cross-platform method of doing this (which is what I thought sys.exit() was)?
Edit: I've isolated the issue. I created a small test script, and it seems that when I create a thread it just doesn't die, so the program never exits. I've changed my initial code to show the setup.
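
A likely cause, assuming the test script looks like the snippet above: sys.exit() only raises SystemExit in the thread that calls it, and the interpreter waits for every non-daemon thread to finish before the process can exit. Since check_for() loops forever, the process hangs around. Marking the worker as a daemon thread lets the interpreter exit without waiting for it:

t = threading.Thread(target=check_for)
t.daemon = True  # daemon threads do not keep the interpreter alive
t.start()

If check_for() holds resources that need orderly release, the alternative is to give it a threading.Event it polls, set the event in the KeyboardInterrupt handler, and join() the thread before exiting.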

Passing values between threading.Timer runs

I'm scratching my head here; I'm not sure if this is the right way to approach it, but right now I can't think of another way (open to suggestions). I am using the BACpypes library, which requires you to create a device and an application, then call run(), which initializes the device on the network.
What I am trying to do is send a write_property command to the device every couple of minutes. The problem is that I can only do so after I call the run() method (which initializes and hosts the device), and as soon as I do, nothing beyond that method gets called until I stop the program completely, because it's a single-threaded application.
So I thought I'd create a method called Update which runs every 30 seconds and tries to write to the device using threading.Timer (since it then runs on a separate thread). The issue I'm having is that the Update method I use to write to the device can't be executed until run() has been called, but I have to start my method before run(), otherwise it will never execute. Basically, what I want to know is: can I pass a bool to my Update method that prevents it from running write_property the first time, so that it waits until run() has been executed, and every time after that it can try to write? Perhaps just add a try/except and skip?
example of what the code looks like (this is my main try block):

isFirstRun = False
try:
    test_device = LocalDeviceObject(...)
    this_application = Application(test_device, args.ini.address)
    Update(None, this_application, isFirstRun)
    run()
Update method:

def Update(client, app, isFirstRun):
    threading.Timer(30.0, Update, [client, app, isFirstRun]).start()
    if isFirstRun:
        pass  # run() hasn't been called yet, so skip this round
    else:
        pass  # run() is active; execute the rest of the code
Instead of calling Update directly, why not call threading.Timer(...) in your main thread as well? That way Update won't run until 30 seconds have passed, which achieves the same thing you are doing with the boolean, but is a lot less clunky.
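
A sketch of that suggestion, reusing the names from the question (this_application and run are the Application instance and the blocking BACpypes call from the snippet above; the write_property call is left as a comment since its arguments depend on the device):

import threading

def Update(client, app):
    # by the time this first fires, run() below has been active for 30 seconds
    # app.write_property(...) would go here
    threading.Timer(30.0, Update, [client, app]).start()  # reschedule

# schedule the first Update before handing the main thread to run()
threading.Timer(30.0, Update, [None, this_application]).start()
run()  # blocks the main thread; Update keeps firing on timer threads

Note that each threading.Timer fires exactly once on its own thread, so Update has to reschedule itself at the end of every invocation; the isFirstRun flag becomes unnecessary.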

Constantly monitor a program/process using Python

I am trying to constantly monitor a process which is basically a Python program. If the program stops, then I have to start the program again. I am using another Python program to do so.
For example, say I have to constantly run a process called run_constantly.py. I initially run this program manually, which writes its process ID to the file "PID" (in the location out/PROCESSID/PID).
Now I run another program which has the following code to monitor the program run_constantly.py from a Linux environment:
import imp
import time
from datetime import datetime

# "scheduler" is an APScheduler instance created elsewhere in the program

def Monitor_Periodic_Process():
    TIMER_RUNIN = 1800
    TIMER_NOT_RUNIN = 120  # value assumed; not defined in the original snippet
    foo = imp.load_source("Run_Module", "run_constantly.py")
    PROGRAM_TO_MONITOR = ['run_constantly.py', 'out/PROCESSID/PID']
    while True:
        # call checkPID to see if the program is running or not
        res = checkPID(PROGRAM_TO_MONITOR)
        if res == 0:
            # program is not running, so schedule it
            date_time = datetime.now()
            scheduler.add_cron_job(foo.Run_Module, year=date_time.year,
                                   day=date_time.day, month=date_time.month,
                                   hour=date_time.hour, minute=date_time.minute + 2)
            scheduler.start()
            scheduler.get_jobs()
            time.sleep(TIMER_NOT_RUNIN)
        else:
            # the process is running; sleep and then monitor again
            time.sleep(TIMER_RUNIN)
I have not included the checkPID() function here. checkPID() basically checks whether the process ID still exists (i.e. whether the program is still running); if it does not exist, it returns 0. In the above program, I check if res == 0, and if so, I use Python's scheduler to schedule the program. However, the major problem I am currently facing is that the process ID of this monitoring program and of run_constantly.py turn out to be the same once I schedule run_constantly.py using scheduler.add_cron_job(). So if run_constantly.py crashes, the monitoring program still thinks it is running (since both process IDs are the same) and therefore keeps taking the else branch to sleep and monitor again.
Can someone tell me how to solve this issue? Is there a simple way to constantly monitor a program and reschedule it when it has crashed?
There are many programs that can do this.
- On Ubuntu there is upstart (installed by default)
- Lots of people like http://supervisord.org/
- monit, as mentioned by @nathan
If you are looking for a Python alternative, there is a library that has just been released called circus which looks interesting.
And pretty much every Linux distro probably has one of these built in. The choice really just comes down to which one you like better, but you would be far better off using one of these than writing it yourself.
Hope that helps
If you are willing to control the monitored program directly from Python instead of using cron, have a look at the subprocess module:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
Check examples like "track process status with python" on SO for examples and references.
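
A minimal sketch of that approach, assuming run_constantly.py can simply be relaunched with no extra setup:

import subprocess
import time

while True:
    # start the monitored script as a direct child process
    proc = subprocess.Popen(['python', 'run_constantly.py'])
    proc.wait()      # blocks until the child exits or crashes
    time.sleep(5)    # small pause so a crash loop cannot spin hot

Because the monitor owns the child process directly, there is no PID file to go stale, which sidesteps the duplicate-PID confusion described in the question.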
You could just use monit (http://mmonit.com/monit/). It monitors processes and restarts them (and other things).
I thought I'd add a more versatile solution, one that I personally use all the time as well.
Its name is Immortal (source is at https://github.com/immortal/immortal).
To have it monitor and instantly restart a program if it stops, simply run the following command:
immortal <command>
So in your case I would run run_constantly.py like so:
immortal python run_constantly.py
The command ps aux | grep run_constantly.py should return 2 process IDs: one for the Immortal supervisor, and one for the command Immortal started (just the regular command). As long as the Immortal process is running, run_constantly.py will stay running.

Graceful Handling of Segfault

I'm writing a program in Python that uses a closed-source API on Linux. The API sometimes works and sometimes segfaults, crashing my program too. However, if the program runs for 10 seconds, it's past the point where it has a chance of segfaulting and runs forever (the errors only happen in the beginning).
I think I need some type of script that:
- starts my Python program,
- waits 10 seconds,
- checks if Python is still running,
- if it is running, ends itself without ending Python,
- if Python is NOT running, repeats.
Is such a program possible? Will a segfault kill the script also?
Yes, such a program is perfectly possible. You just have to run the two programs in separate processes; a SIGSEGV only kills the process in which it occurred.
If you are on Linux, you can use either bash or Python, as you like. Just start the failing script in a separate process. Code in Python could look similar to this:
import subprocess
import time

threshold = 10  # seconds; per the question, crashes only happen in this window

start = time.monotonic()  # monotonic clock; time.clock() was removed in 3.8
ret = subprocess.call(['myprog', 'myarg0', ...])
end = time.monotonic()

if end - start < threshold:
    # returned too quickly: it must have crashed during startup, so try again
    restart()  # restart() (not shown) would re-launch myprog
Also, the return code from such a process has a meaningful value when it terminated because of a SEGFAULT: subprocess.call() returns the negative signal number, so a segfault shows up as -11 (SIGSEGV).
Can you isolate the calls to this buggy API inside a child process? That way you can check the exit status and handle crashes from the parent, rather than trying to catch them; a segfault never reaches Python's exception machinery in the process that crashes.
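
A sketch of that isolation, where buggy_api_worker.py is a hypothetical script that wraps the calls to the closed-source API:

import subprocess

ret = subprocess.call(['python', 'buggy_api_worker.py'])
if ret < 0:
    # a negative return code means the child was killed by a signal;
    # SIGSEGV is signal 11, so a segfault shows up as -11
    print('worker segfaulted, restarting...')

Note that the crash can't be caught as a Python exception even in the child; the parent only ever sees it through the exit status.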
