I am trying to start and later kill a process that requires sudo from a Python script. Even though the Python script itself is run with sudo and kill() does not raise any permission errors, the process is not killed (and never receives SIGKILL).
Investigating this, I found out that Popen() returns the process id of the sudo process (I assume, at least) rather than that of the process I actually want to control. So when I kill it later, the underlying process keeps running. (Although if I kill the Python program before killing the sudo process in Python code, the underlying process is also killed, so I guess there must be a way to do this manually, too.)
I know it might be an option to use pgrep or pidof to search for the correct process, but as the process's name might not be unique it seems unnecessarily error prone (a process with the same name might also be started around the same time, so taking the latest one might not help).
Is there any reliable way to get the pid of the underlying process started with sudo in Python?
Using Python 3.
My test code, slightly modified from https://stackoverflow.com/a/43417395/1171541:
import subprocess, time

cmd = ["sudo", "testscript.sh"]

def myfunction(action, process=None):
    if action == "start":
        process = subprocess.Popen(cmd)
        return process
    if action == "stop":
        # kill() and send_signal(signal.SIGTERM) do not work either
        process.terminate()

process = myfunction("start")
time.sleep(5)
myfunction("stop", process)
Okay, I can answer my own question here (the solution I found on https://izziswift.com/how-to-terminate-a-python-subprocess-launched-with-shelltrue/). The trick was to open the process with:
subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
and then kill it:
os.killpg(os.getpgid(process.pid), signal.SIGTERM)
This time I open the command through a shell and use os.killpg to kill all the processes in the process group.
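For completeness, a minimal end-to-end sketch of that approach (the command string and the choice of SIGTERM are just illustrative; with shell=True the command is passed as a single string):

import os, signal, subprocess, time

# Start the command in its own session so the shell, sudo and the
# actual script all end up in the same process group.
process = subprocess.Popen(
    "sudo testscript.sh",
    stdout=subprocess.PIPE,
    shell=True,
    preexec_fn=os.setsid,
)

time.sleep(5)

# Signal the whole process group instead of only the group leader.
os.killpg(os.getpgid(process.pid), signal.SIGTERM)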
Related
OS: Windows 10
Python: 3.5.2
I am trying to open calc.exe, do some actions and then close it.
Here is my code sample
import subprocess, os, time
p = subprocess.Popen('calc.exe')
#Some actions
time.sleep(2)
p.kill()
So this is not working for calc.exe: it just opens the calculator, but does not close it. The same code works fine for notepad.exe.
I am guessing that there is a bug in the subprocess library's kill method. The notepad.exe process shows up in Task Manager as notepad.exe, but the calc.exe process shows up as Calculator.exe, so my guess is that kill tries to find the process by name and does not find it.
There's no bug in Popen.kill. If you're really worried about that, just check the source, which is linked from the docs. The kill method just calls send_signal, which just calls os.kill unless the process is already done, and you can see the Windows implementation for that function. In short: subprocess.Popen.kill doesn't care what name the process has in the kernel's process table (or the Task Manager); it remembers the PID (process ID) of the process it started, and kills it that way.
The most likely problem is that, like many Windows apps, calc.exe has some special "single instance" code: when you launch it, if there's already a copy of calc.exe running in your session, it just tells that copy to come to the foreground (and open a window, if it doesn't have one), and then exits. So, by the time you try to kill it 2 seconds later, the process has already exited.
And if the actual running process is calculator.exe, that means calc.exe is just a launcher for the real program, so it always tells calculator.exe to come to the foreground, launching it if necessary, and then exits.
So, how can you kill the new calculator you started? Well, you can't, because you didn't start a new one. You can kill all calc.exe and/or calculator.exe processes (the easiest way to do this is with a third-party library like psutil—see the examples on filtering and then kill the process once you've found it), but that will kill any existing calculator process you had open before running your program, not just the new one you started. Since calc.exe makes it impossible to tell if you've started a new process or not, there's really no way around that.
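A rough sketch of that psutil approach (the name matching is illustrative, and it kills every matching calculator process, including ones you opened yourself):

import psutil

# Kill every running calculator, old or new: as explained above, there is
# no way to single out "the one we started".
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] in ("calc.exe", "Calculator.exe"):
        try:
            proc.kill()
        except psutil.NoSuchProcess:
            pass  # it already exited in the meantime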
This is one way to kill it, but it will close every open calculator.
It launches a command prompt with no window and runs taskkill to close the Calculator.exe process.
import subprocess, os, time
p = subprocess.Popen('calc.exe')
print(p)
#Some actions
time.sleep(2)
CREATE_NO_WINDOW = 0x08000000  # hide the console window of the taskkill call
subprocess.call('taskkill /F /IM Calculator.exe', creationflags=CREATE_NO_WINDOW)
I'm trying to figure out how to properly close out my script that's supposed to start up a Django server running in a docker container (boot2docker, on Mac OS X). Here's the pertinent code block:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        subprocess.call('./startdockerdjango.sh', shell=True)
    except KeyboardInterrupt:
        return
Where startdockerdjango.sh takes care of setting the environment variables that docker needs and starts the server up. The script overall is supposed to know whether to do first-time setup and initialization or simply start the container and server; catching the CalledProcessError means that first time setup was already done and that the container and server can just be started up. The startup works fine, but when a user presses Ctrl-C to stop the server, the server stops normally but then apparently the process that started the server is still going. If I press return, then I can go back to the normal terminal command prompt. If I do any sort of shell command, like ls, then it will be carried out and then I can return to the terminal. I want to change the code so that, if a user presses Ctrl-C, then the server and the container that the server is running in will stop normally and then, afterward, stop the process and have the whole script exit. How can this be done? I don't want to just kill or terminate the process upon KeyboardInterrupt, since then the server and container won't be able to stop normally but will be killed off abruptly.
UPDATE:
I recently tried the following according to Padraic Cunningham's comment:
try:
    init_code = subprocess.check_output('./initdocker.sh', shell=True)
    subprocess.call('./startdockerdjango.sh', shell=True)
except subprocess.CalledProcessError:
    try:
        startproc = subprocess.Popen('./startdockerdjango.sh')
    except KeyboardInterrupt:
        startproc.send_signal(SIGTERM)
        startproc.wait()
        return
This was my attempt to send a term to the server to shut down gracefully and then use wait() to wait for the process (startproc) to complete. This, however, results in just having the container and server end abruptly, something that I was trying to prevent. The same thing happens if I try SIGINT instead. What, if anything, am I doing wrong in this second approach? I still want the same overall thing as before, which is having one single Ctrl-C end the container and server, then exit the script.
You might want to create the process using Popen. It will give you a little more control over how you manage the child process.
env = {"MY_ENV_VAR": "some value"}
proc = subprocess.Popen("./dockerdjango.sh", env=env)
try:
    proc.wait()
except KeyboardInterrupt:
    # terminate() sends SIGTERM; on Linux this gives the child a chance
    # to clean up, or even to ignore the signal entirely.
    proc.terminate()
    # Use proc.send_signal(...) and the signal module to send other signals,
    # or proc.kill() if you wish to kill the process immediately.
If you set the environment variables in Python it will also result in fewer child processes that need to be killed.
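A caveat: passing env= like this replaces the child's entire environment, so in practice you may want to extend a copy of os.environ instead of handing over only the new variable. A small sketch:

import os
import subprocess

# Copy the current environment and add the extra variable on top of it,
# so PATH, HOME etc. remain visible to the shell script.
env = dict(os.environ, MY_ENV_VAR="some value")
proc = subprocess.Popen("./dockerdjango.sh", env=env)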
In the end, it wasn't worth the effort to have the script know to either do first-time initialization or server+container startup. Instead, the script will just try first-time setup and then will tell the user to do docker-compose up after successful setup. This is a better solution for my specific situation than trying to figure out how to have Ctrl-C properly shut down the server and then exit the script.
To reset the Django server subprocess, execute in your terminal:
$ sudo lsof -i tcp:8080
$ sudo lsof -i tcp:8080|awk '{print $2}'|cut -d/ -f 1|xargs kill
I want to submit my long-running Python job using an ampersand. I'm going to kick this process off from an interactive Python program by using a subprocess call.
How would I keep track of the submitted job programmatically in case I want to end the job from a menu option?
Example of interactive program:
Main Menu
1. Submit long running job &
2. End long running job
If you're using python's subprocess module, you don't really need to background it again with & do you? You can just keep your Popen object around to track the job, and it will run while the other python process continues.
If your "outer" python process is going to terminate what sort of track do you need to keep? Would pgrep/pkill be suitable? Alternately, you could have the long running job log its PID, often under /var/run somewhere, and use that to track if the process is still alive and/or signal it.
You could use Unix signals. Here we capture SIGUSR1 to tell the process to communicate some info to STDOUT.
#!/usr/bin/env python
import signal
import sys

def signal_handler(signum, frame):
    print('Caught SIGUSR1!')
    print("Current job status is " + get_job_status())

signal.signal(signal.SIGUSR1, signal_handler)
and then from the shell
kill -s SIGUSR1 <pid>
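Or, from another Python process, the same signal can be sent with os.kill (pid here stands for whatever PID you recorded for the job):

import os
import signal

os.kill(pid, signal.SIGUSR1)  # pid: the PID of the long-running job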
I am running multiple copies of the same python script on an Amazon EC2 Ubuntu instance. Each copy in turn launches the same child Python script using the solution proposed here
From time to time some of these child processes die. subprocess.check_output throws an exception and returns the error code -9. I ran the child process directly from the prompt and after running for some time, the process dies with a not-so-detailed message Killed.
Questions:
What does -9 mean?
How can I find out more about what went wrong? Specifically, my suspicion is that it might be caused by the machine getting overloaded by the several copies of the same script running at the same time. At the same time, the specific child process that I ran directly appears to be dying every time it's launched, directly or not, and more or less at the same moment (i.e. after processing more or less the same amount of input data). Python is not producing any error messages.
Assuming I have no bugs in the Python code, what can I do to try to prevent the crashes?
check_output() accumulates output from the subprocess in memory. If the process generates enough output, it might be killed by the OOM killer due to the large RAM consumption.
If you don't need the output, you could use check_call() instead and discard the output:
import os
from subprocess import check_call, STDOUT
DEVNULL = open(os.devnull, "r+b")
check_call([command], stdout=DEVNULL, stderr=STDOUT)
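On Python 3.3+ the same thing can be written without opening os.devnull by hand, using subprocess.DEVNULL:

from subprocess import check_call, DEVNULL, STDOUT

check_call([command], stdout=DEVNULL, stderr=STDOUT)  # command: same placeholder as above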
A return code of -9 means the child was terminated by signal 9 (SIGKILL), a signal that cannot be caught or ignored, so the process quits immediately.
For example if you're trying to kill a process you could enter in your terminal:
ps aux | grep processname
or just this to get a list of all processes: ps aux
Once you have the pid of the process you want to terminate, you'd type kill -9 followed by the pid:
kill -9 1234
My memory is a little foggy when it comes to logs, but I'd cat around in /var/log/ and see if you find anything, or dmesg.
As far as preventing crashes in your Python code, have you tried any exception handling?
Exceptions in Python
I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these.
When the image-viewing process exits, I want to remove the temporary images.
I can't do this from Python, as the Python process may have exited before the subprocess completes, i.e. I cannot do the following:
p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
p.communicate()
os.unlink("/example/image1.jpg")
os.unlink("/example/image2.jpg")
...as this blocks the main thread, nor could I check for the pid exiting in a thread, etc.
The only solution I can think of means I have to use shell=True, which I would rather avoid:
import pipes
import subprocess

cmd = ['imgviewer']
cmd.append("/example/image2.jpg")

# cleanup holds the paths of the temporary images to delete afterwards
for x in cleanup:
    cmd.extend(["&&", "rm", pipes.quote(x)])

cmdstr = " ".join(cmd)
subprocess.Popen(cmdstr, shell=True)
This works, but is hardly elegant..
Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
If you're on any variant of Unix, you could fork your Python program, and have the parent process go on with its life while the child process daemonizes, runs the viewer (it doesn't matter in the least if that blocks the child process, which has no other job in life anyway;-), and cleans up after it. The original Python process may or may not exist at this point, but the "waiting to clean up" child process of course will (some process or other has to do the clean-up, after all, right?-).
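A rough sketch of that Unix approach, assuming the same imgviewer command and temp files as above (a full double-fork daemonization is omitted for brevity):

import os
import subprocess

temp_files = ["/example/image1.jpg", "/example/image2.jpg"]

pid = os.fork()
if pid == 0:
    # Child: detach into its own session so it outlives the parent,
    # run the viewer (blocking is fine here), then clean up.
    os.setsid()
    subprocess.call(["imgviewer"] + temp_files)
    for path in temp_files:
        try:
            os.unlink(path)
        except OSError:
            pass
    os._exit(0)
# Parent: carry on immediately; the child handles the clean-up.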
If you're on Windows, or need cross-platform code, then have your Python program "spawn" (i.e., just start with subprocess, then go on with life) another (much smaller) one, which is the one tasked to run the viewer (blocking, who cares) and then do the clean-up. (If on Unix, even in this case you may want to daemonize, otherwise the child process might go away when the parent process does).
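A sketch of that cross-platform variant: a tiny helper script (here called cleanup_viewer.py, a made-up name) that runs the viewer and then deletes the files, launched fire-and-forget from the main program:

# cleanup_viewer.py -- helper: view the given images, then delete them
import os
import subprocess
import sys

files = sys.argv[1:]
subprocess.call(["imgviewer"] + files)  # blocks until the viewer exits
for path in files:
    try:
        os.unlink(path)
    except OSError:
        pass

The main program then just starts the helper and moves on:

import subprocess, sys

subprocess.Popen([sys.executable, "cleanup_viewer.py",
                  "/example/image1.jpg", "/example/image2.jpg"])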