Python kill all processes owned by user - python

I need to make a function that kills all processes owned by a user, and later starts a few new ones.
My main problem is that I cannot figure out how to check whether all the processes were killed and, if some are still running, retry killing them 1-2 times before returning an error. I want to use only Python code.
Here is my code:
import os
import pwd

def pkill(user):
    pids = []
    user_pids = []
    uid = pwd.getpwnam(user).pw_uid
    # get all PIDs
    for i in os.listdir('/proc'):
        if i.isdigit():
            pids.append(i)
    # test if the PID is owned by the user
    for i in pids:
        puid = os.stat(os.path.join('/proc', i)).st_uid
        if puid == uid:
            user_pids.append(i)
    # print len(user_pids)
    # check if the PID still exists and kill it
    for i in user_pids:
        if os.path.exists(os.path.join('/proc', i)):
            try:
                os.kill(int(i), 15)
            except OSError:
                pass  # the process may already be gone
Thank you

The standard way to check whether a process is running on Linux (it is POSIX-compatible as well) is kill -0 PID. Here you can simply do an os.kill but with 0 as the signal: if the process is dead it will throw an exception, and if it is alive it will not.
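Building on that, here is a minimal sketch of the retry loop the question asks for; pkill_with_retry and user_pids are illustrative names, not part of the original code:

import os
import time

def pkill_with_retry(user_pids, retries=2):
    remaining = list(user_pids)
    for _ in range(retries + 1):
        for pid in remaining:
            try:
                os.kill(int(pid), 15)        # SIGTERM
            except OSError:
                pass                         # already gone
        time.sleep(1)
        still_alive = []
        for pid in remaining:
            try:
                os.kill(int(pid), 0)         # signal 0: existence check only
                still_alive.append(pid)
            except OSError:
                pass
        if not still_alive:
            return True                      # every process is gone
        remaining = still_alive
    return False                             # give up and report an error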

Can't you just repeat the same scan you used to find the processes? After killing them, that scan should return zero PIDs for the user.

Related

How to make a program close its currently running instance upon startup?

I have a command line program which I'd like to keep running until I open it again, so basically a program which first checks whether there's already a running instance of itself and kills it.
I tried os.system('TASKKILL /F /IM program.exe') but it turned out to be stupid because it also kills itself.
The most reliable way to make sure there's only one instance of your application is to create a pid file in a known (fixed) location. This location will usually be in your application data folder or in the temporary directory. At startup, you should check if the pid file exists and if the pid contained there still exists and refers to your target process. If it exists, send a kill signal to it, then overwrite the file with your current pid before starting the rest of the application.
For extra safety, you may want to wait until the previous process has completely terminated. This can be done by either waiting/polling to check whether the process with that pid still exists, or by polling for the killed process to delete its own pid file. The latter may be necessary if process shutdown is very lengthy and you want to allow the current process to start working while the old process is shutting down.
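A rough sketch of that startup check, assuming psutil is available; the pid-file path here is just a placeholder:

import os
import tempfile
import psutil

PID_FILE = os.path.join(tempfile.gettempdir(), "myapp.pid")   # hypothetical location

def kill_previous_instance():
    # If a pid file exists and the pid in it is still alive, terminate it,
    # wait for it to exit, then record our own pid.
    if os.path.exists(PID_FILE):
        try:
            with open(PID_FILE) as f:
                old_pid = int(f.read().strip())
        except ValueError:
            old_pid = None
        if old_pid and psutil.pid_exists(old_pid):
            old = psutil.Process(old_pid)
            old.terminate()
            try:
                old.wait(timeout=10)        # poll until the old instance is gone
            except psutil.TimeoutExpired:
                old.kill()
    with open(PID_FILE, 'w') as f:
        f.write(str(os.getpid()))

A production version would also check that the pid actually refers to your own program (for example via psutil.Process(old_pid).name()) before killing it.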
You can use the psutil library. The code below iterates through all running processes, filters out those with a specific filename, and kills any that have a different PID than the current process. It will also run on any platform, provided you use the right process filename.
import os
import psutil

process_to_kill = "program.exe"
# get the PID of the current process
my_pid = os.getpid()
# iterate through all running processes
for p in psutil.process_iter():
    # if it's the process we're looking for...
    if p.name() == process_to_kill:
        # ...and it has a different PID than the current process, kill it
        if not p.pid == my_pid:
            p.terminate()
If the program filename alone is not unique enough, you can use the method Process.exe() instead, which returns the full path of the process image:
process_to_kill = r"c:\some\path\program.exe"
for p in psutil.process_iter():
    if p.exe() == process_to_kill:
        # ...
Because my workstations don't have access to the internet and installing packages is a mess, I ended up with this solution:
import os

os.system('tasklist > location/tasks.txt')
with open('location/tasks.txt', 'r') as pslist:
    for line in pslist:
        if line.startswith('python.exe'):
            if line.split()[1] != str(os.getpid()):
                os.system(f'TASKKILL /F /PID {line.split()[1]}')
                break
os.remove('location/tasks.txt')
It prints the output of the tasklist command to a file and then checks the file to see if there's a running python process with a different PID from its own.
Edit: I figured out I can do it with popen, so it's shorter and no files are involved:
import os

for line in os.popen('tasklist').readlines():
    if line.startswith('python.exe'):
        if line.split()[1] != str(os.getpid()):
            os.system(f'taskkill /F /PID {line.split()[1]}')
            break
You can use the process ID of the already running instance:
import os
os.system("taskkill /pid <ProcessID>")

Python: how to kill child process(es) when parent dies?

The child process is started with
subprocess.Popen(arg)
Is there a way to ensure it is killed when parent terminates abnormally? I need this to work both on Windows and Linux. I am aware of this solution for Linux.
Edit:
the requirement of starting a child process with subprocess.Popen(arg) can be relaxed, if a solution exists using a different method of starting a process.
Heh, I was just researching this myself yesterday! Assuming you can't alter the child program:
On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.)
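For concreteness, here's a minimal sketch of that prctl approach using ctypes and Popen's preexec_fn (which runs in the child between fork and exec); PR_SET_PDEATHSIG is the constant from <sys/prctl.h>, and the child command is just an example:

import ctypes
import signal
import subprocess

PR_SET_PDEATHSIG = 1                        # from <sys/prctl.h>, Linux-only
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def _set_pdeathsig():
    # Ask the kernel to deliver SIGTERM to this (child) process when the
    # parent thread that spawned it dies.
    libc.prctl(PR_SET_PDEATHSIG, int(signal.SIGTERM))

child = subprocess.Popen(["sleep", "60"], preexec_fn=_set_pdeathsig)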
On Windows, the most reliable option is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), then you place the child process into the Job, and you set the magic option that says "when no-one holds a 'handle' for this Job, then kill the processes that are in it". By default, the only 'handle' to the Job is the one that your parent process holds, and when the parent process dies, the OS will go through and close all its handles, and then notice that this means there are no open handles for the Job. So then it kills the child, as requested. (If you have multiple child processes, you can assign them all to the same Job.)

This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child instead of subprocess.Popen, because it needs a "process handle" for the spawned child, and CreateProcess returns one by default. If you'd rather use subprocess.Popen, here's an (untested) copy of the code from that answer that uses subprocess.Popen and OpenProcess instead of CreateProcess:
import subprocess
import win32api
import win32con
import win32job
hJob = win32job.CreateJobObject(None, "")
extended_info = win32job.QueryInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation)
extended_info['BasicLimitInformation']['LimitFlags'] = win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation, extended_info)
child = subprocess.Popen(...)
# Convert process id to process handle:
perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
hProcess = win32api.OpenProcess(perms, False, child.pid)
win32job.AssignProcessToJobObject(hJob, hProcess)
Technically, there's a tiny race condition here in case the child dies in between the Popen and OpenProcess calls; you can decide whether you want to worry about that.
One downside to using a job object is that when running on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking on an icon), then there will probably already be a job object assigned and trying to create a new job object will fail. Win8 fixes this (by allowing job objects to be nested); if your program is run from the command line, it should be fine either way.
If you can modify the child (e.g., like when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g. as a command line argument, or in the args= argument to multiprocessing.Process), and then:
On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the pid passed in from the parent, then call os._exit(). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific.)
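A minimal sketch of that POSIX watcher thread, assuming the parent's PID arrives as the first command-line argument (how you pass it is up to you):

import os
import sys
import threading
import time

# Assume the parent passed its own PID on the command line.
parent_pid = int(sys.argv[1])

def watch_parent():
    # Once the original parent dies, this process is re-parented (usually
    # to init), so os.getppid() stops matching the PID we were given.
    while os.getppid() == parent_pid:
        time.sleep(1)
    os._exit(1)

watcher = threading.Thread(target=watch_parent)
watcher.daemon = True
watcher.start()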
On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. Example using ctypes:
from ctypes import WinDLL, WinError
from ctypes.wintypes import DWORD, BOOL, HANDLE
import os
# Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
SYNCHRONIZE = 0x00100000
kernel32 = WinDLL("kernel32.dll")
kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
kernel32.OpenProcess.restype = HANDLE
parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
# Block until parent exits
os.waitpid(parent_handle, 0)
os._exit(0)
This avoids any of the possible issues with job objects that I mentioned.
If you want to be really, really sure, then you can combine all these solutions.
Hope that helps!
The Popen object offers the terminate and kill methods.
https://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
These send the SIGTERM and SIGKILL signals for you.
You can do something akin to the below:
from subprocess import Popen

p = None
try:
    p = Popen(arg)
    # some code here
except Exception as ex:
    print 'Parent program has exited with the below error:\n{0}'.format(ex)
    if p:
        p.terminate()
UPDATE:
You are correct--the above code will not protect against hard-crashing or someone killing your process. In that case you can try wrapping the child process in a class and employ a polling model to watch the parent process.
Be aware psutil is non-standard.
import os
import psutil
from multiprocessing import Process
from time import sleep

class MyProcessAbstraction(object):
    def __init__(self, parent_pid, command):
        """
        #type parent_pid: int
        #type command: str
        """
        self._child = None
        self._cmd = command
        self._parent = psutil.Process(pid=parent_pid)

    def run_child(self):
        """
        Start a child process by running self._cmd.
        Wait until the parent process (self._parent) has died, then kill the
        child.
        """
        print '---- Running command: "%s" ----' % self._cmd
        self._child = psutil.Popen(self._cmd)
        try:
            while self._parent.status == psutil.STATUS_RUNNING:
                sleep(1)
        except psutil.NoSuchProcess:
            pass
        finally:
            print '---- Terminating child PID %s ----' % self._child.pid
            self._child.terminate()

if __name__ == "__main__":
    parent = os.getpid()
    child = MyProcessAbstraction(parent, 'ping -t localhost')
    child_proc = Process(target=child.run_child)
    child_proc.daemon = True
    child_proc.start()

    print '---- Try killing PID: %s ----' % parent
    while True:
        sleep(1)
In this example I run 'ping -t localhost' because that command runs forever. If you kill the parent process, the child process (the ping command) will also be killed.
Since, from what I can tell, the PR_SET_PDEATHSIG solution can result in a deadlock when any threads are running in the parent process, I didn't want to use that and figured out another way. I created a separate auto-terminate process that detects when its parent process is done and kills the other subprocess that is its target.
To accomplish this, you need to pip install psutil, and then write code similar to the following:
import subprocess
import sys
from subprocess import Popen

def start_auto_cleanup_subprocess(target_pid):
    cleanup_script = f"""
import os
import psutil
import signal
from time import sleep

try:
    # Block until stdin is closed, which means the parent process
    # has terminated.
    input()
except Exception:
    # Should be an EOFError, but if any other exception happens,
    # assume we should respond in the same way.
    pass

if not psutil.pid_exists({target_pid}):
    # Target process has already exited, so nothing to do.
    exit()

os.kill({target_pid}, signal.SIGTERM)
for count in range(10):
    if not psutil.pid_exists({target_pid}):
        # Target process is no longer running.
        exit()
    sleep(1)

os.kill({target_pid}, signal.SIGKILL)
# Don't bother waiting to see if this works since if it doesn't,
# there is nothing else we can do.
"""
    return Popen(
        [
            sys.executable,  # Python executable
            '-c', cleanup_script
        ],
        stdin=subprocess.PIPE
    )
This is similar to https://stackoverflow.com/a/23436111/396373, which I had failed to notice, but I think the approach I came up with is easier to use because the process that is the target of cleanup is created directly by the parent. Also note that it is not necessary to poll the status of the parent, though you still need psutil and you still poll the status of the target subprocess during the termination sequence if, as in this example, you want to terminate, monitor, and then kill when terminate didn't work expeditiously.
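For illustration, a hypothetical way to wire this up from the parent; worker.py is a placeholder for whatever subprocess you want cleaned up:

import subprocess
import sys

# Start the subprocess we want cleaned up, then the watcher targeting it.
worker = subprocess.Popen([sys.executable, "worker.py"])     # placeholder command
watcher = start_auto_cleanup_subprocess(worker.pid)
# The watcher's stdin is a pipe from this process, so if we die (even from
# SIGKILL) the pipe closes, input() returns, and the watcher kills the worker.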
Hook the exit of your process using SetConsoleCtrlHandler and kill the subprocess there. I think this is a bit of overkill, but it works :)
import os
import shlex
import subprocess
import psutil
import win32api

def kill_proc_tree(pid, including_parent=True):
    parent = psutil.Process(pid)
    children = parent.children(recursive=True)
    for child in children:
        child.kill()
    gone, still_alive = psutil.wait_procs(children, timeout=5)
    if including_parent:
        parent.kill()
        parent.wait(5)

def func(x):
    print("killed")
    if anotherproc:
        kill_proc_tree(anotherproc.pid)
    kill_proc_tree(os.getpid())

win32api.SetConsoleCtrlHandler(func, True)

PROCESSTORUN = "your process"
anotherproc = None
cmdline = f"/c start /wait \"{PROCESSTORUN}\" "
anotherproc = subprocess.Popen(executable='C:\\Windows\\system32\\cmd.EXE',
                               args=shlex.split(cmdline, posix=False))
...
run program
...
Took kill_proc_tree from:
subprocess: deleting child processes in Windows

python-daemon context fails to start when a stale PID file is present

I'm using python-daemon, and having the problem that when I kill -9 a process, it leaves a pidfile behind (ok) and the next time I run my program it doesn't work unless I have already removed the pidfile by hand (not ok).
I catch all exceptions in order that context.close() is called before terminating -- when this happens (e.g. on a kill) the /var/run/mydaemon.pid* files are removed and a subsequent daemon run succeeds. However, when using SIGKILL (kill -9), I don't have the chance to call context.close(), and the /var/run files remain. In this instance, the next time I run my program it does not start successfully -- the original process returns, but the daemonized process blocks at context.open().
It seems like python-daemon ought to be noticing that there is a pidfile for a process that no longer exists, and clearing it out, but that isn't happening. Am I supposed to be doing this by hand?
Note: I'm not using with because this code runs on Python 2.4
from daemon import DaemonContext
from daemon.pidlockfile import PIDLockFile

context = DaemonContext(pidfile=PIDLockFile("/var/run/mydaemon.pid"))
context.open()
try:
    retry_main_loop()
except Exception, e:
    pass
context.close()
If you are running Linux, and process-level locks are acceptable, read on.
We try to acquire the lock. If it fails, check whether the lock is held by a running process. If not, break the lock and continue.
import os
from lockfile.pidlockfile import PIDLockFile
from lockfile import AlreadyLocked

pidfile = PIDLockFile("/var/run/mydaemon.pid", timeout=-1)
try:
    pidfile.acquire()
except AlreadyLocked:
    try:
        os.kill(pidfile.read_pid(), 0)
        print 'Process already running!'
        exit(1)
    except OSError:  # No process with locked PID
        pidfile.break_lock()
# pidfile can now be used to create DaemonContext
Edit: Looks like PIDLockFile is available only on lockfile >= 0.9
With the script provided here
the pid file remains on kill -9 as you say, but the script also cleans up properly on a restart.

Indefinite daemonized process spawning in Python

I'm trying to build a Python daemon that launches other fully independent processes.
The general idea is for a given shell command, poll every few seconds and ensure that exactly k instances of the command are running. We keep a directory of pidfiles, and when we poll we remove pidfiles whose pids are no longer running and start up (and make pidfiles for) however many processes we need to get to k of them.
The child processes also need to be fully independent, so that if the parent process dies the children won't be killed. From what I've read, it seems there is no way to do this with the subprocess module. To this end, I used the snippet mentioned here:
http://code.activestate.com/recipes/66012-fork-a-daemon-process-on-unix/
I made a couple of necessary modifications (you'll see the lines commented out in the attached snippet):
The original parent process can't exit because we need the launcher daemon to persist indefinitely.
The child processes need to start with the same cwd as the parent.
Here's my spawn fn and a test:
import os
import sys
import subprocess
import time

def spawn(cmd, child_cwd):
    """
    do the UNIX double-fork magic, see Stevens' "Advanced
    Programming in the UNIX Environment" for details (ISBN 0201563177)
    http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
    """
    try:
        pid = os.fork()
        if pid > 0:
            # exit first parent
            #sys.exit(0) # parent daemon needs to stay alive to launch more in the future
            return
    except OSError, e:
        sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
        sys.exit(1)

    # decouple from parent environment
    #os.chdir("/") # we want the child processes to keep the parent's cwd
    os.setsid()
    os.umask(0)

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError, e:
        sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
        sys.exit(1)

    # redirect standard file descriptors
    sys.stdout.flush()
    sys.stderr.flush()
    si = file('/dev/null', 'r')
    so = file('/dev/null', 'a+')
    se = file('/dev/null', 'a+', 0)
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    pid = subprocess.Popen(cmd, cwd=child_cwd, shell=True).pid

    # write pidfile
    with open('pids/%s.pid' % pid, 'w') as f: f.write(str(pid))
    sys.exit(1)

def mkdir_if_none(path):
    if not os.access(path, os.R_OK):
        os.mkdir(path)

if __name__ == '__main__':
    try:
        cmd = sys.argv[1]
        num = int(sys.argv[2])
    except:
        print 'Usage: %s <cmd> <num procs>' % __file__
        sys.exit(1)

    mkdir_if_none('pids')
    mkdir_if_none('test_cwd')

    for i in xrange(num):
        print 'spawning %d...' % i
        spawn(cmd, 'test_cwd')
        time.sleep(0.01) # give the system some breathing room
In this situation, things seem to work fine, and the child processes persist even when the parent is killed. However, I'm still running into a spawn limit on the original parent. After ~650 spawns (not concurrently, the children have finished) the parent process chokes with the error:
spawning 650...
fork #2 failed: 35 (Resource temporarily unavailable)
Is there any way to rewrite my spawn function so that I can spawn these independent child processes indefinitely? Thanks!
Thanks to your list of processes I'm willing to say that this is because you have hit one of a number of fundamental limitations:
rlimit nproc maximum number of processes a given user is allowed to execute -- see setrlimit(2), the bash(1) ulimit built-in, and /etc/security/limits.conf for details on per-user process limits.
rlimit nofile maximum number of file descriptors a given process is allowed to have open at once. (Each new process probably creates three new pipes in the parent, for the child's stdin, stdout, and stderr descriptors.)
System-wide maximum number of processes; see /proc/sys/kernel/pid_max.
System-wide maximum number of open files; see /proc/sys/fs/file-max.
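If it helps to check where you stand, the two per-process limits above can be read from Python with the standard-library resource module (Linux/POSIX); a small illustrative sketch:

import resource

# Maximum number of processes for this user (rlimit nproc).
soft_nproc, hard_nproc = resource.getrlimit(resource.RLIMIT_NPROC)
# Maximum number of open file descriptors (rlimit nofile).
soft_nofile, hard_nofile = resource.getrlimit(resource.RLIMIT_NOFILE)

print("nproc:  soft=%s hard=%s" % (soft_nproc, hard_nproc))
print("nofile: soft=%s hard=%s" % (soft_nofile, hard_nofile))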
Because you're not reaping your dead children, many of these resources are held open longer than they should. Your second children are being properly handled by init(8) -- their parent is dead, so they are re-parented to init(8), and init(8) will clean up after them (wait(2)) when they die.
However, your program is responsible for cleaning up after the first set of children. C programs typically install a signal(7) handler for SIGCHLD that calls wait(2) or waitpid(2) to reap the children's exit status and thus remove its entries from the kernel's memory.
But signal handling in a script is a bit annoying. If you can set the SIGCHLD signal disposition to SIG_IGN explicitly, the kernel will know that you are not interested in the exit status and will reap the children for you.
Try adding:
import signal
signal.signal(signal.SIGCHLD, signal.SIG_IGN)
near the top of your program.
Note that I don't know what this does for the subprocess module. It might not be pleased. If that is the case, then you'll need to install a signal handler that calls wait(2) for you.
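If you do end up needing that, a minimal sketch of such a handler, reaping every finished child with a non-blocking waitpid loop:

import os
import signal

def reap_children(signum, frame):
    # Reap every child that has already exited, without blocking.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError:      # no children left to wait for
            break
        if pid == 0:         # children exist, but none have exited yet
            break

signal.signal(signal.SIGCHLD, reap_children)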
I slightly modified your code and was able to run 5000 processes without any issues, so I agree with @sarnold that you hit some fundamental limitation. My modifications are:
proc = subprocess.Popen(cmd, cwd=child_cwd, shell=True, close_fds=True)
pid = proc.pid
# write pidfile
with open('pids/%s.pid' % pid, 'w') as f: f.write(str(pid))
proc.wait()
sys.exit(1)

Opening a process with Popen and getting the PID

I'm working on a nifty little function:
def startProcess(name, path):
    """
    Starts a process in the background and writes a PID file

    returns integer: pid
    """
    # Check if the process is already running
    status, pid = processStatus(name)
    if status == RUNNING:
        raise AlreadyStartedError(pid)

    # Start process
    process = subprocess.Popen(path + ' > /dev/null 2> /dev/null &', shell=True)

    # Write PID file
    pidfilename = os.path.join(PIDPATH, name + '.pid')
    pidfile = open(pidfilename, 'w')
    pidfile.write(str(process.pid))
    pidfile.close()
    return process.pid
The problem is that process.pid isn't the correct PID. It seems it's always 1 lower than the correct PID. For instance, it says the process started at 31729, but ps says it's running at 31730. Every time I've tried it's off by 1. I'm guessing the PID it returns is the PID of the current process, not the started one, and the new process gets the 'next' pid which is 1 higher. If this is the case, I can't just rely on returning process.pid + 1 since I have no guarantee that it'll always be correct.
Why doesn't process.pid return the PID of the new process, and how can I achieve the behaviour I'm after?
From the documentation at http://docs.python.org/library/subprocess.html:
Popen.pid The process ID of the child process.
Note that if you set the shell argument to True, this is the process
ID of the spawned shell.
If shell is false, it should behave as you expect, I think.
If you were relying on shell being True to resolve the executable path via the PATH environment variable, you can accomplish the same thing with shutil.which and then pass the absolute path to Popen. (As an aside, if you are using Python 3.5 or newer, you should be using subprocess.run rather than Popen.)
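For example, a small sketch of the shell=False route; the program name here is just an illustration:

import shutil
import subprocess

exe = shutil.which("sleep")              # resolve via PATH, as the shell would
process = subprocess.Popen([exe, "60"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
print(process.pid)                       # PID of the program itself, not of a shell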
