I have to monitor a process continuously, and I use its process ID to do so. I wrote a program to send an email once the process had stopped so that I could manually reschedule it, but I often forget to reschedule the process (basically another Python program). I then came across the apscheduler module and used its cron-style scheduling (http://packages.python.org/APScheduler/cronschedule.html) to respawn the process once it has stopped. I am now able to respawn the process once its PID has disappeared, but when I spawn it using apscheduler I cannot get the process ID (PID) of the newly scheduled process, so I cannot monitor it. Is there a function in apscheduler to get the process ID of the scheduled process?
Instead of relying on APScheduler to return the pid, why not have your program report the pid itself? It's quite common for daemons to have pidfiles: files at a known location that just contain the pid of the running process. Just wrap your main function in something like this:
import os

try:
    # write our own pid to a pidfile at a well-known location
    with open("/tmp/myproc.pid", "w") as pidfile:
        pidfile.write(str(os.getpid()))
    main()
finally:
    # remove the pidfile when the process exits
    os.remove("/tmp/myproc.pid")
Now whenever you want to monitor your process you can first check whether the pid file exists, and if it does, retrieve the pid of the process for further monitoring. This has the benefit of being independent of a specific implementation of cron, and it will make it easier in the future if you want to write more programs that interact with the program locally.
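For the monitoring side, here is a minimal sketch of what that check could look like; the pidfile path and the use of os.kill with signal 0 as a liveness probe are my own assumptions, not part of the answer above:

import os

PIDFILE = "/tmp/myproc.pid"  # hypothetical path; must match the one the daemon writes

def get_running_pid():
    """Return the pid from the pidfile if that process is still alive, else None."""
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return None
    try:
        os.kill(pid, 0)  # signal 0 does not kill; it only checks that the process exists
    except ProcessLookupError:
        return None
    return pid

if __name__ == "__main__":
    pid = get_running_pid()
    if pid:
        print("process running with pid", pid)
    else:
        print("process not running")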
Related
I'm trying to use win32com.client in order to interact with scheduled tasks on Windows.
So far it's working fine, but I'm having trouble figuring out how to get the PID of a running process created by the scheduled task.
The documentation says to look at the IRunningTask::get_EnginePID method. However, while I can do something like:
import win32com.client

scheduler = win32com.client.Dispatch('Schedule.Service')
scheduler.Connect()
folder = scheduler.GetFolder('\\')             # root task folder
task = folder.GetTask('Name of created task')  # returns the registered task
task.State
task.Name
I'm not sure how to access this EnginePID attribute, as task.EnginePID (or anything like that) doesn't work.
So for example, I have a task that launches calc.exe. I want to find the PID of calc.exe, or whatever process is spawned by the scheduled task.
How can I achieve this?
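For what it's worth, here is a hedged sketch of one possible direction: GetTask returns a registered task rather than a running task instance, and EnginePID may only be exposed on the running-task objects returned by the scheduler's GetRunningTasks call. The task name is just a placeholder, and whether EnginePID refers to calc.exe itself or to the hosting task engine is something to verify on your system:

import win32com.client

scheduler = win32com.client.Dispatch('Schedule.Service')
scheduler.Connect()

# GetRunningTasks enumerates currently running task instances; each instance
# is an IRunningTask, which is the interface that exposes EnginePID.
for running in scheduler.GetRunningTasks(0):
    if running.Name == 'Name of created task':   # placeholder task name
        print(running.Name, running.EnginePID)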
I used multiprocessing.Pool to improve the performance of my Python server. When a task fails, I want to terminate its child processes immediately.
I found that if I create a process using Process, the terminate method meets my needs, but if I create a process with Pool.apply_async, the return type is ApplyResult, and it can't terminate the corresponding process.
Is there any other way to do it?
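For comparison, here is a minimal sketch of the Process-based approach mentioned above, where terminate() is available on the process object itself; the worker function and the timeout are made up for illustration:

import time
from multiprocessing import Process

def worker():
    # placeholder task that runs longer than we are willing to wait
    time.sleep(60)

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join(timeout=2)    # give the task a chance to finish
    if p.is_alive():     # treat it as failed and kill it immediately
        p.terminate()
        p.join()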
I have a daemon process, created with the runit package, that keeps on running. I want the daemon process to listen to a table and perform tasks based on a column of the table that says what task it needs to perform.
E.g., table 'A' has a column job_type.
I was thinking of forking child processes from this daemon process every time it gets a new task to perform (based on a new row inserted into table A, which the daemon listens to).
The multiprocessing module says I can't or shouldn't fork child processes from a daemonic process, because if it dies, its child processes are orphaned.
What is a good approach to achieve this: the daemon listens to the table and, based on a column value, forks child processes (all independent of each other) that do the task, report back to the daemon, and die?
I need to use some locking mechanism if the child processes are accessing and modifying shared data.
I assume the daemon process you have is also spawned from a Python script that called multiprocessing with daemon=True.
In that case, the fact that the daemon is running implies that your creator process is still running, so you can just send it a message via a pipe to spawn a new process for you. If your daemon needs to talk with that process, use sockets or any IPC method of your choice.
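A minimal sketch of that pipe-based handoff, where the parent (not the daemon) does the actual spawning; the message format and the worker function are only illustrative:

from multiprocessing import Process, Pipe

def worker(job_type):
    # placeholder for the real task selected by the table's job_type column
    print("running job:", job_type)

def daemon_loop(conn):
    # the daemonic process never spawns children itself; it only asks the parent to
    for job_type in ["export", "cleanup"]:  # stand-in for rows read from table A
        conn.send(job_type)
    conn.send(None)                          # tell the parent we are done

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    d = Process(target=daemon_loop, args=(child_conn,), daemon=True)
    d.start()
    while True:
        job_type = parent_conn.recv()
        if job_type is None:
            break
        Process(target=worker, args=(job_type,)).start()  # parent spawns the worker
    d.join()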
I am trying to implement a job queuing system like Torque PBS on a cluster.
One requirement is to kill all the subprocesses even after the parent has exited. This is important because if someone's job doesn't wait for its subprocesses to end, deliberately or unintentionally, the subprocesses become orphans and are adopted by init, and it then becomes difficult to track them down and kill them.
However, I figured out a trick to work around the problem: the magic trait is the CPU affinity of the subprocesses, because all subprocesses have the same CPU affinity as their parent. But this is not perfect, because the CPU affinity can also be changed deliberately.
I would like to know whether there is anything else that is shared by a parent process and its offspring and, at the same time, immutable.
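As a hedged illustration of the affinity trick described above (not an endorsement of it), one might scan /proc and compare each process's affinity mask against a reference; os.sched_getaffinity is Linux-only, and using the current process as the reference is just an example:

import os

def pids_with_same_affinity(reference_pid):
    """Return the pids whose CPU affinity matches that of reference_pid (Linux only)."""
    target = os.sched_getaffinity(reference_pid)
    matches = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            if os.sched_getaffinity(int(entry)) == target:
                matches.append(int(entry))
        except (ProcessLookupError, PermissionError):
            pass  # process exited meanwhile, or is not ours to inspect
    return matches

print(pids_with_same_affinity(os.getpid()))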
The process table in Linux (as in nearly every other operating system) is simply a data structure in the RAM of a computer. It holds information about the processes that are currently handled by the OS.
This includes general information about each process:
process id
process owner
process priority
environment variables for each process
the parent process
pointers to the executable machine code of a process.
Credit goes to Marcus Gründler
None of the information available will help you out.
But you can maybe use the fact that the process should stop when the parent process id becomes 1 (init).
#!/usr/local/bin/python
from time import sleep
import os
import sys

# os.getppid() returns the parent pid; it becomes 1 once the parent has exited
while os.getppid() != 1:
    sleep(1)

# the parent pid is now 1 (init), so the parent is gone; exit the program
sys.exit()
Would that be a solution to your problem?
Hi, let's assume I have a simple program in Python. This program is run every five minutes through cron, but I don't know how to write it so that the program will allow multiple processes of itself to run simultaneously. I want to speed things up ...
I'd handle the forking and process control inside your main Python program. Let cron spawn only a single process, and let that process be a master for (possibly multiple) worker processes.
As for how you can create multiple workers, there's the threading module for multithreading and the multiprocessing module for multiprocessing. You can also keep your actual worker code in separate files and use the subprocess module.
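A minimal sketch of that master/worker layout using multiprocessing; the worker function, the number of workers, and the work items are all made-up placeholders:

from multiprocessing import Pool

def do_work(item):
    # placeholder for the real per-item work your script currently does
    return item * item

if __name__ == "__main__":
    work_items = range(100)          # stand-in for whatever your cron job processes
    with Pool(processes=4) as pool:  # one master (this script), four workers
        results = pool.map(do_work, work_items)
    print(sum(results))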
Now that I think about it, maybe you should use supervisord to do the actual process control and only write the actual worker code yourself.