I have a program that needs to know if a certain process (also part of the program, but running as a daemon) owned by root exists. The process is started from within the program using pkexec so that the program itself can run as a normal user.
Normally, if I need to know whether a process is running, I would use os.kill(pid, 0) and catch the resulting exception. Unfortunately, in this case, Python simply raises OSError: [Errno 1] Operation not permitted, regardless of whether the process exists or not.
Apart from manually parsing the output of ps aux | grep myprogram, is there a simple way of knowing whether the process exists without resorting to an external library like psutil? psutil seems like an awfully large dependency to add for such a simple task.
os.geteuid()
"Return the current process’s effective user id."
root's effective uid is zero:
if os.geteuid() == 0:
    print('running as root')
else:
    print('no root for you')
If you know the pid, you can use psutil:
import psutil

if psutil.Process(the_pid).is_running():
    print('Process is running')
else:
    print('Process is *not* running')
Bonus points: this works with Python from 2.4 to 3.3, and on Linux, OS X, Windows, FreeBSD, Sun Solaris and probably more.
Checking whether /proc/the-pid exists only works on *nix machines, not on Windows.
Note also that simply checking /proc/the-pid is not enough to conclude that the process is running. The OS is free to reuse pids, so if the process ended and a different process was spawned with the same pid, you are screwed.
You must also save the creation time of the original process somewhere. Then, to check whether the process exists, first check /proc/the-pid and then check that the creation time of that process matches what you saved. psutil does this automatically.
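The same pid-plus-creation-time idea can be sketched with the standard library alone on Linux: field 22 of /proc/<pid>/stat is the process start time in clock ticks, and it never changes for a given process. The helper names below are made up for illustration:

```python
import os

# Field 22 of /proc/<pid>/stat is the start time in clock ticks.
# The comm field can contain spaces, so split off "pid (comm)" first.
def start_time(pid):
    with open('/proc/%d/stat' % pid) as f:
        return int(f.read().rsplit(')', 1)[1].split()[19])

def is_same_process(pid, saved_start):
    try:
        return start_time(pid) == saved_start
    except OSError:
        return False  # no /proc entry: the process is gone

saved = start_time(os.getpid())  # record this when the daemon starts
print(is_same_process(os.getpid(), saved))  # → True
```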
I am unable to post code for this, sorry, but I am trying to run a Python script at all times from another Python script which creates a system tray icon. This system tray will show whether the program is running correctly or not.
I have tried a few methods so far, the most promising method has been using something like:
p = subprocess.Popen([xxx, xxx], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Then I check whether stderr has any output, meaning there's been an error.
However, this only works when I deliberately make an error occur (using the wrong file path); nothing happens when I use the correct file path, because the program never terminates.
So the main issue I'm having is that, because I want the program to be running at all times, it never terminates unless there's an error. I want to be able to check that it is running so the user can see the status on the system tray.
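One way around reading stderr is Popen.poll(), which returns None while the child is still alive and its exit code once it has terminated, without blocking. A minimal sketch, where the sleeping child is just a stand-in for the real script:

```python
import subprocess
import sys

# Launch a child that simply sleeps, standing in for the monitored script
p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(1)'])

# poll() is None while the child runs -- the tray icon can show "OK" here
if p.poll() is None:
    print('still running')
else:
    print('exited with status %d' % p.returncode)

p.wait()  # let the child finish for this demo
```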
In Unix we use certain system calls for process management. A process is started when its parent executes a fork system call. The parent may then wait() for the child to terminate, either normally via the exit() system call or abnormally due to a fatal exception or signal (e.g., SIGTERM, SIGINT, SIGKILL). The exit status is returned to the OS and a SIGCHLD signal is sent to the parent process, which can then retrieve the status via the wait system call to learn what actually happened. Cheers
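That cycle can be illustrated in a few lines of Python (Unix-only, since it uses os.fork):

```python
import os

# A minimal illustration of the fork/exit/wait cycle described above
pid = os.fork()
if pid == 0:                        # child process
    os._exit(7)                     # terminate immediately with status 7
child, status = os.waitpid(pid, 0)  # parent blocks until the child exits
print(os.WEXITSTATUS(status))       # → 7
```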
This question already has answers here:
Make sure only a single instance of a program is running
(23 answers)
Closed 4 years ago.
Let's suppose we have a long script: Foo.pyx
If I execute it twice, the two instances run simultaneously.
How can I terminate the first one when I start the second?
Edit: terminating the later ones can also work; I just want only one instance to run at any time.
The traditional POSIX answer to this is to use a pidfile: a file stored in a consistent location that just holds the PID of the process.
The reason you want to store the PID is so that if the script is killed, a new copy of your program will be able to see that and take over, instead of falsely reporting that another copy is already running. (You can send a 0 signal with os.kill; if it succeeds, the process exists.)
There are libraries to do this for you and get all the details right, but a simple version is:
import os
import sys

PIDFILE = '/var/run/myscript.pid'

def is_running():
    try:
        with open(PIDFILE) as f:
            pid = int(next(f))
        # Signal 0 probes for existence without affecting the process.
        # os.kill returns None on success, so report True explicitly.
        os.kill(pid, 0)
        return True
    except Exception:
        return False

if __name__ == '__main__':
    if is_running():
        print('Another copy is already running')
        sys.exit(0)
    with open(PIDFILE, 'w') as f:
        f.write(f'{os.getpid()}\n')
You probably want cleaner error handling in real code, and you need to think about races, but I've kept it simple to make the idea clear. (If you want to do this for real, you're probably better off using a library, or thinking through all the issues yourself, than copying code off SO.)
The traditional Windows answer is a little different. On Windows, it's very easy to get a mandatory exclusive lock on a file or other resource that's automatically released on exit. In fact, that's what happens by default when you open a file—but, unfortunately, it’s not what happens with Python’s open, which goes out of its way to allow sharing. You can use a library like win32api from PyWin32, or just ctypes, to call CreateFile requesting exclusive access. If you fail because the file is locked, there's another process running.
It’s also important to decide where to create the file. Normally you use a per-user temp directory rather than a global one, which means each user can have one copy of the program running, rather than one copy for the entire system.
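One way to sketch that (the helper name and naming scheme here are my own invention): derive the lock-file path from the temp directory and the username, so each user gets their own file:

```python
import getpass
import os
import tempfile

# Hypothetical helper: build a per-user lock-file path. gettempdir()
# is already per-user on Windows; embedding the username also covers
# the shared /tmp case on Unix.
def per_user_pidfile(name):
    return os.path.join(tempfile.gettempdir(),
                        '%s-%s.pid' % (name, getpass.getuser()))

print(per_user_pidfile('myscript'))
```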
When the script starts, check if a specific file exists. A good name choice is myscript.lock, but any name will do.
If it exists, the script should exit immediately. Otherwise create an empty file of that name and proceed with the script. Then remove the file when the script finishes.
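Note that the check-then-create sequence has a small race window between the two steps; os.open with O_CREAT | O_EXCL performs them as one atomic step. A minimal sketch (the lock-file path is just an example):

```python
import os
import sys
import tempfile

LOCKFILE = os.path.join(tempfile.gettempdir(), 'myscript-example.lock')

try:
    # O_CREAT | O_EXCL makes "check and create" a single atomic step
    fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    sys.exit('Another instance appears to be running')

try:
    pass  # ... the real work goes here ...
finally:
    os.close(fd)
    os.remove(LOCKFILE)  # clean up so the next run can start
```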
myalert.py
from daemon import Daemon
import os, time, sys

class alertDaemon(Daemon):
    def run(self):
        while True:
            time.sleep(1)

if __name__ == "__main__":
    alert_pid = '/tmp/ex.pid'
    # if the pid file already exists, assume a daemon is running and bail out
    if os.path.isfile(alert_pid):  # is this check enough?
        sys.exit(0)
    daemon = alertDaemon(alert_pid)
    daemon.start()
Given that no other programs or users will create the pid file:
1) Is there a case where the pid file does not exist, yet the daemon process is still running?
2) Is there a case where the pid file exists, yet the daemon isn't running?
Because if the answer is yes to at least one of the questions above, then simply checking for the existence of the pid file isn't enough, if my goal is to have one daemon running at all times.
Q: If I have to check for the process, I am hoping to avoid something like a system call to ps -ef and grepping for the name of the script. Is there a standard way of doing this?
Note: the script, myalert.py, will be a cronjob
The python-daemon library, which is the reference implementation of PEP 3143, "Standard daemon process library", handles this by taking a file lock (via the lockfile library) on the pid file you pass to the DaemonContext object. The underlying OS guarantees that the file lock is released when the daemon process exits, even if it exits uncleanly. Here's a simple usage example:
import daemon
from daemon.pidfile import PIDLockFile

context = daemon.DaemonContext(
    pidfile=PIDLockFile('/var/run/spam.pid'),
)

with context:
    main()
So, if a new instance starts up, it doesn't have to determine if the process that created the existing pid file is still running via the pid itself; if it can acquire the file lock, then no other instances are running (since they'd have acquired the lock). If it can't acquire the lock, then another daemon instance must be running.
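For illustration, the same lock-based test can be sketched with the standard library alone. fcntl is Unix-only, and python-daemon's PIDLockFile does this portably for you; the path below is a placeholder:

```python
import fcntl
import os
import sys
import tempfile

LOCKFILE = os.path.join(tempfile.gettempdir(), 'myscript-flock.pid')

fd = os.open(LOCKFILE, os.O_CREAT | os.O_RDWR)
try:
    # LOCK_NB makes the attempt fail immediately instead of blocking
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    sys.exit('another instance already holds the lock')

os.ftruncate(fd, 0)
os.write(fd, str(os.getpid()).encode())
# keep fd open for the life of the process; the lock dies with it
```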
The only way you'd run into trouble is if someone came along and manually deleted the pid file while the daemon was running. But I don't think you need to worry about someone deliberately breaking things in that way.
Ideally, python-daemon would be part of the standard library, as was the original goal of PEP 3143. Unfortunately, the PEP got deferred, essentially because there was no one willing to do the remaining work needed to get it added to the standard library:
Further exploration of the concepts covered in this PEP has been
deferred for lack of a current champion interested in promoting the
goals of the PEP and collecting and incorporating feedback, and with
sufficient available time to do so effectively.
Several ways in which I have seen this implemented:
Check whether the pidfile exists -> if so, exit with an error message like "pid file exists -- rm it if you're sure no process is running"
Check whether the pidfile exists -> if so, check whether a process with that pid exists -> if it does, die, telling the user "process is running...". The risk of a conflicting (reused for another process) pid number is so small that it is simply ignored; just tell the user how to start the program again in case an error occurred
Hint: to check for a process existence, you can check for the /proc/<pid> directory
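For example (Linux-specific, since it relies on procfs; the helper name is made up):

```python
import os

# A process with this pid exists iff its /proc/<pid> directory does
def pid_exists(pid):
    return os.path.isdir('/proc/%d' % pid)

print(pid_exists(os.getpid()))  # → True (on Linux)
```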
Also make sure you do everything possible to remove the pidfile when your script exits, e.g.:
Wrap code in a try .. finally:
# Check & create pidfile
try:
    ...  # your application logic
finally:
    ...  # remove pidfile
You can even install signal handlers (via the signal module) to remove pidfile upon receiving signals that would not normally raise an exception, but instead exit directly.
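A sketch combining atexit with such signal handlers; the pid-file path is just an example:

```python
import atexit
import os
import signal
import sys
import tempfile

PIDFILE = os.path.join(tempfile.gettempdir(), 'myscript-signal.pid')

def remove_pidfile():
    try:
        os.remove(PIDFILE)
    except OSError:
        pass

def handle_signal(signum, frame):
    sys.exit(1)  # raises SystemExit, so the atexit handler still runs

# Normal exits run the atexit hook; SIGTERM/SIGINT are turned into
# normal exits so the same cleanup happens.
atexit.register(remove_pidfile)
for sig in (signal.SIGTERM, signal.SIGINT):
    signal.signal(sig, handle_signal)

with open(PIDFILE, 'w') as f:
    f.write('%d\n' % os.getpid())
```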
I have a process that starts up CherryPy, runs a task, and then needs to check whether it is still running or has completed its task. I am running Python 2.6.7.
while True:
    if t1.isAlive():
        cherrypy.engine.start()
    else:
        cherrypy.engine.stop()
        print "server down"
There are several ways to do it; it depends what you really want. Do you want to check whether a process is still alive and kicking, or do you need some feedback information?
Or do you just have to check for output (file/log/db)? Give us some more information; some code examples would clarify your problem.
I guess you could take a look at the PIDFile plugin. With this plugin you could even do a multiprocess check, or a check independent of which process started a CherryPy instance.
Just after the start you initialize the PIDFile plugin and check anywhere outside whether the file exists. The only caveat: it may give a false positive for zombie processes or if the pidfile doesn't get erased.
I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these.
When the image-viewing process exits, I want to remove the temporary images.
I can't do this from Python, as the Python process may have exited before the subprocess completes, i.e. I cannot do the following:
p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image1.jpg"])
p.communicate()
os.unlink("/example/image1.jpg")
os.unlink("/example/image2.jpg")
...as this blocks the main thread; nor could I check for the pid exiting in a thread, etc.
The only solution I can think of means I have to use shell=True, which I would rather avoid:
import pipes
import subprocess

cmd = ['imgviewer']
cmd.append("/example/image2.jpg")

for x in cleanup:
    cmd.extend(["&&", "rm", pipes.quote(x)])

cmdstr = " ".join(cmd)
subprocess.Popen(cmdstr, shell=True)
This works, but is hardly elegant.
Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
If you're on any variant of Unix, you could fork your Python program and have the parent process go on with its life while the child process daemonizes, runs the viewer (it doesn't matter in the least if that blocks the child process, which has no other job in life anyway;-), and cleans up after it. The original Python process may or may not exist at this point, but the "waiting to clean up" child process of course will (some process or other has to do the clean-up, after all, right?-).
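A compressed sketch of that idea, with `sleep 0` standing in for the real viewer command and the temp files created here so the example is self-contained:

```python
import os
import subprocess
import tempfile

# Stand-in temp "images"; in the real program these already exist
paths = [tempfile.mkstemp(suffix='.jpg')[1] for _ in range(2)]

pid = os.fork()
if pid == 0:                         # child: outlives the parent
    os.setsid()                      # detach into its own session
    subprocess.call(['sleep', '0'])  # blocks until the "viewer" exits
    for path in paths:
        try:
            os.unlink(path)          # clean up the temp images
        except OSError:
            pass
    os._exit(0)
# parent: free to go on with its life (or exit) immediately
```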
If you're on Windows, or need cross-platform code, then have your Python program "spawn" (i.e., just start with subprocess, then go on with life) another (much smaller) one, which is the one tasked to run the viewer (blocking, who cares) and then do the clean-up. (If on Unix, even in this case you may want to daemonize, otherwise the child process might go away when the parent process does).
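And a sketch of the cross-platform spawn variant: the parent starts a tiny helper script and returns immediately, while the helper runs the viewer and then deletes the files. `sleep 0` again stands in for the viewer (substitute a real command on Windows, where sleep doesn't exist):

```python
import subprocess
import sys
import tempfile

# Stand-in temp "images"; in the real program these already exist
paths = [tempfile.mkstemp(suffix='.jpg')[1] for _ in range(2)]

# The helper: run the (stand-in) viewer, then remove the files named
# on its command line.
helper = (
    "import os, subprocess, sys\n"
    "subprocess.call(['sleep', '0'])\n"
    "for path in sys.argv[1:]:\n"
    "    try:\n"
    "        os.remove(path)\n"
    "    except OSError:\n"
    "        pass\n"
)

# Popen returns immediately; the helper keeps running on its own
p = subprocess.Popen([sys.executable, '-c', helper] + paths)
```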