Constantly monitor a program/process using Python

I am trying to constantly monitor a process which is basically a Python program. If the program stops, then I have to start the program again. I am using another Python program to do so.
For example, say I have to constantly run a process called run_constantly.py. I initially run this program manually, which writes its process ID to the file "PID" (in the location out/PROCESSID/PID).
Now I run another program which has the following code to monitor the program run_constantly.py from a Linux environment:
def Monitor_Periodic_Process():
    TIMER_RUNIN = 1800        # sleep interval while the process is running
    TIMER_NOT_RUNIN = 1800    # sleep interval after rescheduling the process
    foo = imp.load_source("Run_Module", "run_constantly.py")
    PROGRAM_TO_MONITOR = ['run_constantly.py', 'out/PROCESSID/PID']
    while True:
        # call checkPID to see if the program is running or not
        res = checkPID(PROGRAM_TO_MONITOR)
        # if res is 0 then the program is not running, so schedule it
        if res == 0:
            date_time = datetime.now()
            scheduler.add_cron_job(foo.Run_Module, year=date_time.year,
                                   day=date_time.day, month=date_time.month,
                                   hour=date_time.hour,
                                   minute=date_time.minute + 2)
            scheduler.start()
            scheduler.get_jobs()
            time.sleep(TIMER_NOT_RUNIN)
        else:
            # the process is running; sleep and then monitor again
            time.sleep(TIMER_RUNIN)
I have not included the checkPID() function here. checkPID() basically checks whether the process ID still exists (i.e. whether the program is still running); if it does not exist, it returns 0. In the program above, I check whether res == 0, and if so I use Python's scheduler to schedule the program. However, the major problem I am currently facing is that the process ID of this monitoring program and of run_constantly.py turn out to be the same once I schedule run_constantly.py using the scheduler.add_cron_job() function. So if run_constantly.py crashes, the monitoring program still thinks it is running (since both process IDs are the same), and therefore keeps taking the else branch to sleep and monitor again.
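For reference, such a check usually boils down to reading the PID file and testing /proc; a minimal sketch of what it could look like (the assumption that the PID file holds a single integer is mine):

import os

def checkPID(program_to_monitor):
    pid_file = program_to_monitor[1]
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return 0  # no PID file yet, or unreadable contents
    # on Linux, /proc/<pid> exists only while the process is alive
    return 1 if os.path.exists("/proc/%d" % pid) else 0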
Can someone tell me how to solve this issue? Is there a simple way to constantly monitor a program and reschedule it when it has crashed?

There are many programs that can do this.
On Ubuntu there is upstart (installed by default).
Lots of people like http://supervisord.org/
monit, as mentioned by @nathan.
If you are looking for a Python alternative, there is a recently released library called circus which looks interesting.
And pretty much every Linux distro probably has one of these built in.
The choice is really just down to which one you like better, but you would be far better off using one of these than writing it yourself.
Hope that helps

If you are willing to control the monitored program directly from Python instead of using cron, have a look at the subprocess module:
The subprocess module allows you to spawn new processes,
connect to their input/output/error pipes, and obtain their return codes.
See questions like track process status with python on SO for examples and references.
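A minimal sketch of that approach, restarting the script whenever it exits (the restart policy and sleep intervals are assumptions):

import subprocess
import time

# start the program as a child process, then restart it whenever it exits
while True:
    proc = subprocess.Popen(['python', 'run_constantly.py'])
    while proc.poll() is None:  # poll() returns None while the child is alive
        time.sleep(5)
    # the child exited (crash or normal exit); loop around and restart it
    time.sleep(1)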

You could just use monit
http://mmonit.com/monit/
It monitors processes and restarts them (among other things).
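For instance, a minimal control-file entry for the program from the question might look like this (the paths are assumptions):

check process run_constantly with pidfile /path/to/out/PROCESSID/PID
    start program = "/usr/bin/python /path/to/run_constantly.py"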

I thought I'd add a more versatile solution, which is one that I personally use all the time as well.
Its name is Immortal (the source is at https://github.com/immortal/immortal).
To have it monitor and instantly restart a program if it stops, simply run the following command:
immortal <command>
So in your case I would run run_constantly.py like so:
immortal python run_constantly.py
The command ps aux | grep run_constantly.py should return 2 process IDs: one for the Immortal command, and one for the separate command Immortal started (just the regular command). As long as the Immortal process is running, run_constantly.py will stay running.

Related

How to run a Python process in the background continuously

I'm trying to build a todo manager in Python where I want to continuously run a process in the background that will alert the user with a popup when the specified time comes. I'm wondering how I can achieve that.
I've looked at some of the answers on StackOverflow and on other sites but none of them really helped.
So, what I want to achieve is to start a background process once the user enters a task and keep it running until its time comes. At the same time there might be other threads running for other tasks as well, each ending at its own end time.
So far, I've tried this:
t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()
t.join()
With this the thread runs continuously, but it runs in the foreground and the script only exits when the execution is done.
If I add t.daemon = True to the above code, the main thread exits immediately after start(), and it looks like the daemon thread is killed along with it.
Please let me know how this can be solved.
I'm guessing that you just don't want to see the terminal window after you launch the script. In this case, it is a matter of how you execute the script.
Try these things.
If you are using a Windows computer you can try using pythonw.exe:
pythonw.exe example_script.py
If you are using Linux (or maybe OS X) you may want to use nohup in the terminal:
nohup python example_script.py
More or less, the reason you have to do this comes down to how the operating system handles processes. I am not an expert on this subject matter, but generally, if you launch a script from a terminal, that script becomes a child process of the terminal. So if you exit that terminal, it will also terminate any child processes. The only way to get around that is to detach the process from the terminal with something like nohup.
Now if you add the #!/usr/bin/env python shebang line, your OS could possibly run the script without a terminal window when you double-click it. YMMV (again, it depends on how your OS works).
The first thing you need to do is prevent your script from exiting by adding a while loop in the main thread:
import time
from threading import Thread

t = Thread(target=bg_runner, kwargs={'task': task, 'lock_file': lock_file_path})
t.setName("Get Done " + task)
t.start()

# keep the main thread alive so background worker threads are not killed;
# note there is no t.join() here, as it would block before the loop is reached
while True:
    time.sleep(1.0)
Then you need to put it in the background:
$ nohup python alert_popup.py >> /dev/null 2>&1 &
You can get more information on controlling a background process at this answer.

Parallel Python for loop

I work primarily with the ArcGIS and PCI flavours of Python 2.7. I have a number of processes that I've created that run outside of these programs but use their libraries. They are run via .bat files through cmd.
Currently, they run the processing in a series of for loops, and each for loop processes sequentially. I was wondering whether there is a way to run the processing within the for loop for each object in the list at the same time, that is, in parallel. The only way I can think of is opening a cmd window for each object in the list and running the processing separately.
Is what I am asking even possible? Where should I look for solutions?
Look into the subprocess module. You'd want a new command-line window created in the background where test.bat runs in parallel, and in your case you don't want to wait for the command to complete before you continue your program, so use subprocess.Popen instead of subprocess.call (which may also be something to look into):
subprocess.call
Run the command described by args. Wait for command to complete, then return the returncode attribute.
If you want to start an external program from your Python script, pass the program's filename to subprocess.Popen(). On Ubuntu Linux you would enter something like:
>>> import subprocess
>>> subprocess.Popen('/usr/bin/gnome-...')
<subprocess.Popen object at 0x7f2bcf93b20>
The return value is a Popen object, which has two useful methods: poll() and wait().
poll() is like asking your friend if he has finished running the code you gave him.
wait() is like waiting for your friend to finish working on his code before you keep working on yours.
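Putting that together for the question's use case, here is a minimal sketch that launches one child process per item and then waits for all of them (the batch-file name and item list are assumptions):

import subprocess

items = ['obj1', 'obj2', 'obj3']  # hypothetical objects to process

# launch one child process per item without waiting in between
procs = [subprocess.Popen(['cmd', '/c', 'test.bat', item]) for item in items]

# now block until every child has finished
for p in procs:
    p.wait()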

Python - Run Multiple Scripts At Same Time Methods

I have a bunch of .py scripts as part of a project. Some of them I want to start and have running in the background whilst the others run through what they need to do.
For example, I have a script which takes a screenshot every 10 seconds until the script is closed, and I wish to have this running in the background whilst the other scripts get called and run through till they finish.
Another example is a script which calculates the hash of every file in a designated folder. This has the potential to run for a fair amount of time so it would be good if the rest of the scripts could be kicked off at the same time so they do not have to wait for the Hash script to finish what it is doing before they are invoked.
Is multiprocessing the right method for this kind of processing, or is there another way to achieve these results which would be better, such as this answer: Run multiple python scripts concurrently?
You could also use something like Celery to run the tasks async and you'll be able to call tasks from within your python code instead of through the shell.
It depends. With multiprocessing you can create a process manager, so it can spawn the processes the way you want, but there are more flexible ways to do it without coding. Multiprocessing is usually hard.
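For illustration, a minimal multiprocessing sketch along those lines, assuming the long-running scripts can be imported as modules exposing a main() function (the module names here are hypothetical):

from multiprocessing import Process

import hashing      # hypothetical module wrapping the hash script
import screenshots  # hypothetical module wrapping the screenshot script

if __name__ == '__main__':
    background = [Process(target=screenshots.main),
                  Process(target=hashing.main)]
    for p in background:
        p.start()   # each runs concurrently in its own process
    # ... run the remaining scripts' work here in the meantime ...
    for p in background:
        p.join()    # wait for the background jobs before exiting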
Check out circus, it's a process manager written in Python that you can use as a library, standalone or via remote API. You can define hooks to model dependencies between processes, see docs.
A simple configuration could be:
[watcher:one-shot-script]
cmd = python script.py
numprocesses = 1
warmup_delay = 30
[watcher:snapshots]
cmd = python snapshots.py
numprocesses = 1
warmup_delay = 30
[watcher:hash]
cmd = python hashing.py
numprocesses = 1
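With that saved as, say, config.ini, all three watchers are started together by the circus daemon:

$ circusd config.ini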

Python and Scheduling Computation

I wish to schedule a computation to occur after my current computation in Python is finished. Note that my Python interpreter is running through emacs.
For example I am currently running:
>>> for i in range(2, 5):
...     tn.TweetNetwork.create_subnetworks(i)
...
I made a simple mistake and meant to type range(1,5). This has been running for at least 4 hours and should run for another few hours. That being said I do not want to re-execute the loop with the correction and lose all that has been computed.
As I am not by the computer 24/7, how can I schedule Python to execute the function tn.TweetNetwork.create_subnetworks(1)?
I use Emacs 24.3 and Ubuntu 12.04 LTS; let me know if you need more information. All help is greatly appreciated!
EDIT: I like the answer posted; however, I do not know how to find the PID. I am running a Python interpreter through Emacs. So how would I find that out?
This was too much for the comment, but this isn't a complete reply.
To get a process started by Emacs:
M-x list-processes,
identify the process you want to get the id of, then
M-: (process-id (get-process "name-of-the-process")).
But this will give you the process of the interpreter, not any other process started from it.
If you then need to get all processes spawned through that process, you can do:
$ pstree PID
Where PID is the one you obtained earlier from Emacs.
I think the easiest way is to write another script that waits until your process has finished and then runs tn.TweetNetwork.create_subnetworks(1). This will only work if your create_subnetworks does not access any global variables and writes all results to a database, file, etc.
# Write a script similar to this
import os, time

# replace SCRIPT_PID with the actual process id of the running script
print "Waiting until the old script completes..."
while os.path.exists("/proc/SCRIPT_PID"):
    time.sleep(1)

print "Executing create_subnetworks..."
tn = ...  # set up the TweetNetwork object as in your session
tn.TweetNetwork.create_subnetworks(1)
Connect to your computer by SSH, get the process id with ps axu | grep script_name, and run this new script.
If Tyler's comment does not help, you may eval the following piece of code:
(defun foo (ignored)
  (remove-hook 'comint-output-filter-functions 'foo)
  (run-with-timer 1 nil
                  (lambda ()
                    (goto-char (point-max))
                    (insert "tn.TweetNetwork.create_subnetworks(1)")
                    (comint-send-input))))
(add-hook 'comint-output-filter-functions 'foo)
It defines a function that will insert the command you need into the inferior Python buffer one second after that function is invoked (the delay is to avoid recursive loops).
Then it sets that function up to be invoked when the inferior process (Python, in your case) writes anything. In your case, that would be the ">>>" prompt, which Python writes when ready. If your code is generating output, this approach won't work.
If you are using comint in other buffers (shell, sql, ...), you will need to make the variable comint-output-filter-functions local to your Python interactive buffer (with make-variable-buffer-local).

Graceful Handling of Segfault

I'm writing a program in Python that uses a closed-source API in Linux. The API sometimes works, and sometimes segfaults, crashing my program as well. However, if the program runs for 10 seconds, it's past the point where it has a chance of segfaulting and will run forever (the errors only happen in the beginning).
I think I need some type of script that:
starts my python program,
waits 10 seconds,
checks if python is still running
if it is running, the script should end itself without ending python
if python is NOT running, then repeat.
Is such a program possible? Will a segfault kill the script also?
Yes, such a program is perfectly possible. You just have to run these two programs in separate processes; a segfault only kills the process in which it occurred.
If you are under Linux, you can use either bash or Python, whichever you prefer. Just start the failing script in a separate process. The code in Python could look similar to this:
import subprocess
import time

threshold = 10.0  # seconds; past this point the program no longer segfaults

start = time.time()  # wall-clock time (time.clock() would measure CPU time)
ret = subprocess.call(['myprog', 'myarg0', ...])
end = time.time()

if end - start < threshold:
    # the child died during the risky startup window, so run it again
    restart()
Also, the return code from such a process has a meaningful value when it was terminated by a segfault: subprocess.call returns the negated signal number, e.g. -11 for SIGSEGV.
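Putting both ideas together, a minimal retry loop keyed on the signal could look like this (the command is a placeholder):

import signal
import subprocess

while True:
    ret = subprocess.call(['myprog', 'myarg0'])  # placeholder command
    if ret == -signal.SIGSEGV:
        continue  # killed by SIGSEGV during startup, so run it again
    break  # normal exit (or a different failure): stop retrying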
Can you isolate the calls to this buggy API inside a child process? That way you can check the exit status and handle crashes with a try ... except.
