myalert.py:

from daemon import Daemon
import os, time, sys

class alertDaemon(Daemon):
    def run(self):
        while True:
            time.sleep(1)

if __name__ == "__main__":
    alert_pid = '/tmp/ex.pid'

    # if the pid file doesn't exist, start the daemon
    if os.path.isfile(alert_pid):  # is this check enough?
        sys.exit(0)

    daemon = alertDaemon(alert_pid)
    daemon.start()
Given that no other programs or users will create the pid file:

1) Is there a case where the pid file does not exist, yet the daemon process is still running?

2) Is there a case where the pid file exists, yet the daemon isn't running?

Because if the answer is yes to at least one of the questions above, then simply checking for the existence of the pid file isn't enough, given that my goal is to have one daemon running at all times.

Q: If I have to check for the process itself, I am hoping to avoid something like a system call to ps -ef and grepping for the name of the script. Is there a standard way of doing this?
Note: the script, myalert.py, will be a cronjob
The python-daemon library, which is the reference implementation for PEP 3143, "Standard daemon process library", handles this by taking a file lock (via the lockfile library) on the pid file you pass to the DaemonContext object. The underlying OS guarantees that the file lock will be released when the daemon process exits, even if it exited uncleanly. Here's a simple usage example:
import daemon
from daemon.pidfile import PIDLockFile

context = daemon.DaemonContext(
    pidfile=PIDLockFile('/var/run/spam.pid'),
)

with context:
    main()
So, if a new instance starts up, it doesn't have to determine if the process that created the existing pid file is still running via the pid itself; if it can acquire the file lock, then no other instances are running (since they'd have acquired the lock). If it can't acquire the lock, then another daemon instance must be running.
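The same OS guarantee can be demonstrated directly with the standard library; below is a minimal sketch using fcntl.flock on a lock file (the path is a placeholder). The kernel drops the lock automatically when the holding process exits, crash or not, so a stale lock file is harmless:

```python
import fcntl
import os
import sys

# Hypothetical lock path; the same idea works on the pid file itself.
LOCKFILE = "/tmp/myalert.lock"

fd = os.open(LOCKFILE, os.O_CREAT | os.O_RDWR)
try:
    # Non-blocking exclusive lock: raises immediately if another
    # instance (even one that crashed and left the file behind)
    # still holds the lock.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)  # another instance is running
# ... daemon work; the kernel releases the lock when this process exits.
```

Note that the file's mere existence no longer matters; only the lock does.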
The only way you'd run into trouble is if someone came along and manually deleted the pid file while the daemon was running. But I don't think you need to worry about someone deliberately breaking things in that way.
Ideally, python-daemon would be part of the standard library, as was the original goal of PEP 3143. Unfortunately, the PEP got deferred, essentially because no one was willing to do the remaining work needed to get it added to the standard library:
Further exploration of the concepts covered in this PEP has been
deferred for lack of a current champion interested in promoting the
goals of the PEP and collecting and incorporating feedback, and with
sufficient available time to do so effectively.
Several ways in which I have seen this implemented:

Check whether the pidfile exists -> if so, exit with an error message like "pid file exists -- remove it if you're sure no process is running".

Check whether the pidfile exists -> if so, check whether a process with that pid exists -> if so, die telling the user "process is running...". The risk of a conflicting (reused by another process) pid number is so small that it is simply ignored; tell the user how to make the program start again in case an error occurred.
Hint: to check for a process existence, you can check for the /proc/<pid> directory
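The hint above fits in a one-liner; a sketch (Linux-only, since it relies on the /proc filesystem):

```python
import os

def pid_exists(pid):
    # A directory /proc/<pid> exists exactly while that pid is in use
    # (Linux and other systems with a /proc filesystem).
    return os.path.isdir("/proc/%d" % pid)
```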
Also make sure you do everything possible to remove the pidfile when your script exits, e.g. wrap the code in a try ... finally:

# Check & create pidfile
try:
    # your application logic
finally:
    # remove pidfile
You can even install signal handlers (via the signal module) to remove pidfile upon receiving signals that would not normally raise an exception, but instead exit directly.
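A sketch of that signal-handler idea (the pidfile path is a placeholder): converting SIGTERM into a SystemExit lets the normal cleanup machinery (try..finally, atexit) run, whereas the default action would kill the process without unwinding anything.

```python
import atexit
import os
import signal
import sys

PIDFILE = "/tmp/myalert.pid"  # hypothetical path

def remove_pidfile():
    try:
        os.unlink(PIDFILE)
    except FileNotFoundError:
        pass

def on_signal(signum, frame):
    # Raise SystemExit so finally-blocks and atexit handlers run.
    sys.exit(0)

atexit.register(remove_pidfile)
signal.signal(signal.SIGTERM, on_signal)
```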
Related
I am unable to post code for this, sorry, but I am trying to run a python script at all times from another python script which creates a system tray. The system tray will show whether the program is running correctly or not.
I have tried a few methods so far, the most promising method has been using something like:
p = subprocess.Popen([xxx, xxx], stdout=PIPE, stderr=PIPE)
Then I check whether stderr has any output, meaning there has been an error.

However, this only works when I deliberately cause an error (using the wrong file path); nothing happens when I use the correct file path, because the program never terminates.

So the main issue I'm having is that, because I want the program to be running at all times, it never terminates unless there's an error. I want to be able to check whether it is running so the user can see its status in the system tray.
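One non-blocking way to ask "is it still alive?" is Popen.poll(), which returns None while the child is running and the return code once it has exited. A sketch (the child command below is a stand-in for the real script):

```python
import subprocess
import sys

# Placeholder long-running child; substitute your actual script path.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

if p.poll() is None:
    print("running")                 # tray icon can show an OK state
else:
    print("exited:", p.returncode)   # tray icon can show an error state

p.kill()   # clean up the placeholder child
p.wait()
```

Calling poll() periodically from the tray's event loop avoids blocking the way communicate() does.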
In Unix we use certain system calls for process management. A process is started when its parent process executes a fork system call. The parent may then wait() for the child process to terminate, either normally via the exit() system call or abnormally due to a fatal exception or signal (e.g., SIGTERM, SIGINT, SIGKILL). The exit status is returned to the OS and a SIGCHLD signal is sent to the parent process, which can then retrieve the status via the wait system call to learn what actually happened. Cheers
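On Unix that sequence looks like this in Python (a minimal sketch):

```python
import os

pid = os.fork()
if pid == 0:
    # Child: pretend to do some work, then exit with status 7.
    os._exit(7)
else:
    # Parent: waitpid() blocks until the child terminates, returns
    # (pid, status), and reaps the child in the process.
    _, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):
        print("child exited with", os.WEXITSTATUS(status))  # 7
```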
OS: Windows 10
Python: 3.5.2
I am trying to open calc.exe, perform some actions, and then close it.

Here is my code sample:
import subprocess, os, time
p = subprocess.Popen('calc.exe')
#Some actions
time.sleep(2)
p.kill()
So this is not working for calc.exe: it just opens the calculator but does not close it. The same code works fine for "notepad.exe".

I am guessing there is a bug in the subprocess library's kill method. The notepad.exe process name in Task Manager is notepad.exe, but the calc.exe process shows up as Calculator.exe, so perhaps it is trying to kill by name and cannot find it.
There's no bug in subprocess's kill. If you're really worried about that, just check the source, which is linked from the docs. The kill method just calls send_signal, which just calls os.kill unless the process is already done, and you can see the Windows implementation for that function. In short: subprocess.Popen.kill doesn't care what name the process has in the kernel's process table (or the Task Manager); it remembers the PID (process ID) of the process it started, and kills it that way.
The most likely problem is that, like many Windows apps, calc.exe has some special "single instance" code: when you launch it, if there's already a copy of calc.exe running in your session, it just tells that copy to come to the foreground (and open a window, if it doesn't have one), and then exits. So, by the time you try to kill it 2 seconds later, the process has already exited.
And if the actual running process is calculator.exe, that means calc.exe is just a launcher for the real program, so it always tells calculator.exe to come to the foreground, launching it if necessary, and then exits.
So, how can you kill the new calculator you started? Well, you can't, because you didn't start a new one. You can kill all calc.exe and/or calculator.exe processes (the easiest way to do this is with a third-party library like psutil—see the examples on filtering and then kill the process once you've found it), but that will kill any existing calculator process you had open before running your program, not just the new one you started. Since calc.exe makes it impossible to tell if you've started a new process or not, there's really no way around that.
This is one way to kill it, but it will close every open calculator. It runs taskkill in a hidden command prompt to end the Calculator.exe process.

import subprocess, time

p = subprocess.Popen('calc.exe')
print(p)
# Some actions
time.sleep(2)

CREATE_NO_WINDOW = 0x08000000
subprocess.call('taskkill /F /IM Calculator.exe', creationflags=CREATE_NO_WINDOW)
Solution:
Thanks to Rick Sanders, calling this function after terminating a process resolves the issue:

os.waitpid(pid, options)

Zombie processes are created when a process terminates but is not reaped (i.e., its exit code is never requested). They remain so that the parent can request their exit code; since my script never truly exits (its process image is replaced by execv(file, args)), the parent never requests the exit code and the zombie process is kept. Reaping works on both my OS X and Debian systems.
I am working on a very large script and have recently implemented multiprocessing and IMAP to listen for emails. Before I implemented this I had implemented a restart command that I can enter at command-line to refresh the script after editing, in a nutshell it does this:
if ipt == ':rs':
    execv(__file__)
It prints out a bunch of crap in interim, though.
I also have a process running in another object that listens to Google's IMAP server in a while loop, like so:

while True:
    mail = imaplib.IMAP4_SSL('imap.gmail.com')
    mail.login('myemail@gmail.com', 'mypassword')
    mail.list()
    mail.select("inbox")
    result, data = mail.uid('search', None, 'All')
    # grab the most recent email by unique id number
    latest_email_uid = data[0].split()[-1]
    # last_email_uid was set earlier from a SQL database
    if int(latest_email_uid) != int(last_email_uid):
        pass  # do stuff with the mail
    else:
        continue
Through watching top, I noticed I was creating zombies when I restarted, so I created a termination function:
def process_terminator(self):
    self.imap_listener.terminate()
And I called it from restart:
if ipt == ':rs':
    self.process_object.terminate()
    execv(__file__)
However, the zombie processes still persist. So, after a few hours of work I realized that adding a time.sleep period after calling the function AND either setting a local variable to the process' exitcode OR printing the process' exitcode would allow the process to terminate, even if it was just 0.1 second:
if ipt == ':rs':
    self.process_object.terminate()
    time.sleep(.1)
    print(self.process_object.imap_listener.exitcode)
    execv(__file__)
This is not the case on OS X, though: there, simply calling a process's .terminate() method ends the process. On my Debian machine, however, I HAVE to have a sleep(n) period AND HAVE to refer to the process's exitcode in some form or fashion to prevent it from becoming a zombie.

I have also tried using .join(), though that hangs up my entire script. I have tried creating variables so that the process breaks its while loop when (for example) self.terminated == 1, and then joining, but that does not work either.

I don't have this issue when running exec('quit'), so long as I terminate the process first; .join() does not work.
Can someone please point out any misunderstandings on my part? I have tried doing my own research but have not found a sufficient solution, and I am aware that processes should not be explicitly terminated as they will not exit nicely, but I've found no other way after hours of work.
Sorry that I do not have more code to provide, will do my best to provide more if needed, these are just snippets of relevant code from my script (1000+ lines).
You might start here: https://en.wikipedia.org/wiki/Zombie_process. The parent process has to reap its children when they exit, for example by using waitpid():
os.waitpid(pid, options)
Waits for a particular child process to terminate and returns the pid of the deceased process, or -1 if there is no such child process. On some systems, a value of 0 indicates that there are processes still running.
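With multiprocessing specifically, calling join() after terminate() is what reaps the child (it performs the waitpid for you), so the sleep/exitcode workaround should not be needed. A sketch, where the listener body stands in for the IMAP loop:

```python
import multiprocessing
import time

def listener():
    # Stand-in for the IMAP polling loop.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=listener)
    p.start()
    p.terminate()
    p.join()            # reaps the child: no zombie is left behind
    print(p.exitcode)   # negative signal number; -15 means SIGTERM
```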
I have a program that needs to know if a certain process (also part of the program, but running as a daemon) owned by root exists. The process is started from within the program using pkexec so that the program itself can run as a normal user.
Normally, if I need to know if a process is running, I would use os.kill(pid, 0) and catch the resulting exception. Unfortunately, in this case, Python simply spits an OSError: [Errno 1] Operation not permitted, regardless of whether the process exists or not.
Apart from manually parsing the output of ps aux | grep myprogram, is there a simple way of knowing whether the process exists without resorting to an external library like psutil? psutil seems like an awfully large dependency to add for such a simple task.
os.geteuid()

"Return the current process's effective user id."

root's effective uid is zero:

if os.geteuid() == 0:
    print('running as root')
else:
    print('no root for you')
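If you would rather stay in the standard library, the exception type from os.kill(pid, 0) already distinguishes the two cases the question runs into: ESRCH (ProcessLookupError) means no such process, while EPERM (PermissionError) means the process exists but belongs to another user, such as root. A sketch:

```python
import os

def pid_exists(pid):
    """True if a process with this pid exists, even one we can't signal."""
    try:
        os.kill(pid, 0)   # signal 0 sends nothing; only error checking
    except ProcessLookupError:   # ESRCH: no such process
        return False
    except PermissionError:      # EPERM: it exists, owned by someone else
        return True
    return True
```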
If you know the pid you can use psutil:

import psutil

if psutil.Process(the_pid).is_running():
    print('Process is running')
else:
    print('Process is *not* running')
Bonus points: this works with python from 2.4 to 3.3 and with linux, OS X, Windows, FreeBSD, Sun Solaris and probably more.
Checking whether /proc/the-pid exists only works on *nix machines, not on Windows.

Note also that simply checking /proc/the-pid is not enough to conclude that the process is running. The OS is free to reuse pids, so if the process ended and a different process was spawned with the same pid, you are screwed.
You must also save somewhere the creation time of the original process. Then to check if the process exist you should first check /proc/the-pid and then check that the creation time of that process matches what you saved. psutil does this automatically.
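The pid-plus-creation-time check that psutil performs can be imitated with the standard library by reading field 22 (starttime) of /proc/<pid>/stat. A Linux-only sketch:

```python
import os

def proc_start_time(pid):
    # /proc/<pid>/stat looks like "pid (comm) state ppid ...". The
    # command name may contain spaces, so split after the closing paren.
    with open("/proc/%d/stat" % pid) as f:
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[19])   # field 22 overall = starttime (clock ticks)

def same_process(pid, saved_start_time):
    try:
        return proc_start_time(pid) == saved_start_time
    except FileNotFoundError:
        return False   # the pid is gone entirely
```

Save (pid, proc_start_time(pid)) when the daemon starts; later, the daemon is still running only if same_process returns True.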
I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these.
When the image-viewing process exits, I want to remove the temporary images.
I can't do this from Python, as the Python process may have exited before the subprocess completes. I.e I cannot do the following:
p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
p.communicate()
os.unlink("/example/image1.jpg")
os.unlink("/example/image2.jpg")

...as this blocks the main thread, nor could I check for the pid exiting in a thread, etc.
The only solution I can think of means I have to use shell=True, which I would rather avoid:
import pipes
import subprocess

cmd = ['imgviewer']
cmd.append("/example/image2.jpg")

for x in cleanup:
    cmd.extend(["&&", "rm", pipes.quote(x)])

cmdstr = " ".join(cmd)
subprocess.Popen(cmdstr, shell=True)
This works, but is hardly elegant..
Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
If you're on any variant of Unix, you could fork your Python program, and have the parent process go on with its life while the child process daemonizes, runs the viewer (it doesn't matter in the least if that blocks the child process, which has no other job in life anyway;-), and cleans up after it. The original Python process may or may not exist at this point, but the "waiting to clean up" child process of course will (some process or other has to do the clean-up, after all, right?-).
If you're on Windows, or need cross-platform code, then have your Python program "spawn" (i.e., just start with subprocess, then go on with life) another (much smaller) one, which is the one tasked to run the viewer (blocking, who cares) and then do the clean-up. (If on Unix, even in this case you may want to daemonize, otherwise the child process might go away when the parent process does).
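That "spawn a small helper" idea can be sketched without a separate script file by passing the helper's source to python -c; the viewer command and image paths below are placeholders:

```python
import subprocess
import sys

# Helper source, inlined so no extra file is needed.
HELPER = """
import os, subprocess, sys
viewer = sys.argv[1:3]   # e.g. ["imgviewer", "/example/image1.jpg"]
files = sys.argv[3:]     # temp files to delete afterwards
subprocess.call(viewer)  # blocks until the viewer exits
for path in files:
    try:
        os.unlink(path)
    except OSError:
        pass
"""

subprocess.Popen([sys.executable, "-c", HELPER,
                  "imgviewer", "/example/image1.jpg",
                  "/example/image1.jpg", "/example/image2.jpg"])
# The main process can exit right away; the helper outlives it
# and performs the cleanup once the viewer is closed.
```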