Popen not responding to kill - python

Environment: Raspberry Pi Wheezy
I have a python program that uses Popen to call another python program
from subprocess import *
oJob = Popen('sudo python mypgm.py', shell=True)
Another menu option is supposed to end the job immediately
oJob.kill()
but the job is still running??

When you pass shell=True, Python launches a shell, and the shell in turn launches the process python mypgm.py. oJob.kill() kills only the shell process, which does not kill its own child that runs mypgm.py.
To ensure the child process gets killed as well, you need to put them all in one process group and make the shell process the group leader.
The code is:
import os
import signal
import subprocess

# os.setsid() is passed as preexec_fn, so it runs after fork() and
# before exec(); the shell becomes the leader of a new process group.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(os.getpgid(pro.pid), signal.SIGTERM)  # send the signal to every process in the group
When you send SIGTERM to the process group, the shell and all of its children receive it.
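Applied to the question's command, a minimal sketch (note that sudo runs its child as root, so the parent script may itself need root privileges for killpg to succeed):
import os
import signal
import subprocess

oJob = subprocess.Popen('sudo python mypgm.py', shell=True, preexec_fn=os.setsid)
# ... later, from the menu option that should end the job:
os.killpg(os.getpgid(oJob.pid), signal.SIGTERM)  # reaches the shell, sudo, and mypgm.py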

You need to add a creation flags arg (note that CREATE_NEW_PROCESS_GROUP exists only on Windows):
oJob = Popen('sudo python mypgm.py', shell=True, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
From the docs:
subprocess.CREATE_NEW_PROCESS_GROUP
A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess.
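A sketch of what using os.kill() with this flag could look like, on Windows only (CREATE_NEW_PROCESS_GROUP and CTRL_BREAK_EVENT do not exist on POSIX):
import os
import signal
import subprocess

oJob = subprocess.Popen(['python', 'mypgm.py'],
                        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
os.kill(oJob.pid, signal.CTRL_BREAK_EVENT)  # delivered to every process in the new group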
EDIT: I agree with the comment about how to import things and why you are getting a "something is undefined" error. Also, the other answer seems to be on the right track with getting the pid:
import subprocess as sub
oJob = sub.Popen('sudo python mypgm.py', creationflags=sub.CREATE_NEW_PROCESS_GROUP)
oJob.kill()
Warning: Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input.
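A sketch of the question's call without shell=True, assuming the command needs no shell features (sudo may still refuse the signal if the parent lacks root privileges):
from subprocess import Popen

oJob = Popen(['sudo', 'python', 'mypgm.py'])  # argv list: no shell, no injection risk
oJob.kill()  # the signal now goes straight to sudo instead of to an intermediate shell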

Related

Python, close subprocess with different SID when script ends

I have a python script that launches subprocesses using subprocess.Popen. The subprocess then launches an external command (in my case, it plays an mp3). The python script needs to be able to interrupt the subprocesses, so I used the method described here which gives the subprocess its own session ID. Unfortunately, when I close the python script now, the subprocess will continue to run.
How can I make sure a subprocess launched from a script, but given a different session ID still closes when the python script stops?
Is there any way to kill a Thread in Python?
and make sure you run it as a thread:
import threading
from subprocess import call

def thread_second():
    call(["python", "secondscript.py"])

processThread = threading.Thread(target=thread_second)
processThread.start()
print 'the file is run in the background'
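Note that call() blocks only the worker thread, not the main thread, so the main script keeps running while secondscript.py executes. The thread does nothing to keep the child alive after the main script exits, though.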
TL;DR: Change the Popen params: split up the Popen cmd (e.g. "list -l" -> ["list", "-l"]) and use shell=False
~~~
The best solution I've seen so far is simply not to pass shell=True to Popen. This worked because I didn't really need shell=True; I was only using it because Popen wouldn't recognize my cmd string and I was too lazy to split it into a list of args. This caused me a lot of other problems (e.g. using .terminate() becomes a lot more complicated while using a shell and needs its own session id, see here).
Simply splitting the cmd from a string into a list of args lets me use Popen.terminate() without having to give it its own session id, and by not having a separate session id the process is closed when the python script stops.
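A sketch of the splitting step using shlex, with a made-up command line:
import shlex
import subprocess

cmd = 'python secondscript.py --loop'  # hypothetical command string
args = shlex.split(cmd)                # ['python', 'secondscript.py', '--loop']
proc = subprocess.Popen(args)          # shell=False is the default
proc.terminate()                       # signals the child directly; no shell in between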

python create subprocess (newbie)

I'm new to python, so here's what I'm looking to get done.
I would like to use python to manage some of my gameservers and start/stop them. For this I would like to run every gameserver in its own process.
What's the best way to create processes using python, so these processes can continue even if the main application is stopped?
To start a server I only need to execute shell code.
How can I get access after stopping my main application and restarting it to these processes?
I'm not sure if I understand the question completely, but maybe something like this?
Run process:
import subprocess
subprocess.Popen(['/path/gameserver']) #keeps running
And in another script you can use 'ps -A' to find the pid and kill (or restart) it:
import os
import signal
import subprocess

p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
    if 'gameserver' in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
Check the subprocess module; there is a function called call (see the docs).
You may need to set the process to not be a daemon process.
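The daemon remark is terse; assuming it refers to multiprocessing.Process.daemon, a minimal sketch (the gameserver path is the hypothetical one from above):
import multiprocessing
import subprocess

def run_server():
    subprocess.call(['/path/gameserver'])

p = multiprocessing.Process(target=run_server)
p.daemon = False  # non-daemon: the interpreter waits for it rather than killing it at exit
p.start()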

Launch a completely independent process

I want to initiate a process from my python script main.py. Specifically, I want to run the below command:
`nohup python ./myfile.py &`
and the file myfile.py should continue running, even after the main.py script exits.
I also wish to get the pid of the new process.
I tried:
os.spawnl*
os.exec*
subprocess.Popen
and all are terminating the myfile.py when the main.py script exits.
Update: Can I use os.startfile with xdg-open? Is it the right approach?
Example
a = subprocess.Popen([sys.executable, "nohup /usr/bin/python25 /long_process.py &"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
print a.pid
If I check ps aux | grep long_process, there is no process running.
long_process.py just keeps printing some text; it never exits.
Am I doing anything wrong here?
You open your long-running process and keep a pipe to it. So you expect to talk to it. When your launcher script exits, you can no longer talk to it. The long-running process receives a SIGPIPE and exits.
The following just worked for me (Linux, Python 2.7).
Create a long-running executable:
$ echo "sleep 100" > ~/tmp/sleeper.sh
Run Python REPL:
$ python
>>> import subprocess
>>> import os
>>> p = subprocess.Popen(['/bin/sh', os.path.expanduser('~/tmp/sleeper.sh')])
>>> # look ma, no pipes!
>>> print p.pid
29893
Exit the REPL and see the process still running:
>>> ^D
$ ps ax | grep sleeper
29893 pts/0 S 0:00 /bin/sh .../tmp/sleeper.sh
29917 pts/0 S+ 0:00 grep --color=auto sleeper
If you want to first communicate to the started process and then leave it alone to run further, you have a few options:
Handle SIGPIPE in your long-running process, do not die on it (see the sketch after this list). Live without stdin after the launcher process exits.
Pass whatever you wanted using arguments, environment, or a temporary file.
If you want bidirectional communication, consider using a named pipe (man mkfifo) or a socket, or writing a proper server.
Make the long-running process fork after the initial bidirectional communication phase is done.
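A minimal sketch of the first option, to go inside the long-running process itself (POSIX only):
import signal

# Ignore SIGPIPE: writing to the closed pipe then raises IOError (EPIPE)
# instead of silently killing the process, so it can be caught and handled.
signal.signal(signal.SIGPIPE, signal.SIG_IGN)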
You can use os.fork().
import os

pid = os.fork()
if pid == 0:  # new process
    os.system("nohup python ./myfile.py &")
    exit()
# parent process continues
I could not see any process running.
You don't see any process running because the child python process exits immediately. The Popen arguments are incorrect as user4815162342 says in the comment.
To launch a completely independent process, you could use python-daemon package or use systemd/supervisord/etc:
#!/usr/bin/python25
import daemon
from long_process import main

with daemon.DaemonContext():
    main()
Though it might be enough in your case to start the child with correct Popen arguments:
import os
import sys
import time
from subprocess import Popen, STDOUT

with open(os.devnull, 'r+b', 0) as DEVNULL:
    p = Popen(['/usr/bin/python25', '/path/to/long_process.py'],
              stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT, close_fds=True)
time.sleep(1)  # give it a second to launch
if p.poll():  # the process already finished and it has a nonzero exit code
    sys.exit(p.returncode)
If the child process doesn't require python2.5 then you could use sys.executable instead (to use the same Python version as the parent).
Note: the code closes DEVNULL in the parent without waiting for the child process to finish (it has no effect on the child).
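On Python 3.3+ the manual os.devnull file can be replaced with subprocess.DEVNULL; a sketch:
import subprocess

p = subprocess.Popen(['/usr/bin/python25', '/path/to/long_process.py'],
                     stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL,
                     stderr=subprocess.STDOUT, close_fds=True)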

How to execute a shell script in the background from a Python script

I am working on executing the shell script from Python and so far it is working fine. But I am stuck on one thing.
On my Unix machine, I execute one command in the background using &, like this. This command starts my app server:
david#machineA:/opt/kml$ /opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &
Now I need to execute the same thing from my Python script, but as soon as it executes my command it never goes to the else block and never prints out execute_steps::Successful; it just hangs there.
proc = subprocess.Popen("/opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &",
                        shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        executable='/bin/bash')
if proc.returncode != 0:
    logger.error("execute_steps::Errors while executing the shell script: %s" % stderr)
    sleep(0.05)  # delay for 50 ms
else:
    logger.info("execute_steps::Successful: %s" % stdout)
Am I doing anything wrong here? I want to print out execute_steps::Successful after executing the shell script in the background.
All other commands work fine; only the command I am trying to run in the background doesn't.
There are a couple of things going on here.
First, you're launching a shell in the background, and then telling that shell to run the program in the background. I don't know why you think you need both, but let's ignore that for now. In fact, by adding executable='/bin/bash' on top of shell=True, you're actually trying to run a shell to run a shell to run the program in the background, although that doesn't actually quite work.*
Second, you're using PIPE for the process's output and error, but then not reading them. This can cause the child to deadlock. If you don't want the output, use DEVNULL, not PIPE. If you want to process the output yourself, use proc.communicate(),** or use a higher-level function like check_output. If you just want it to intermingle with your own output, just leave those arguments off.
* If you're using the shell because kml_http is a non-executable script that has to be run by /bin/bash, then don't use shell=True or executable for that; just make /bin/bash the first argument in the command line, and /opt/kml/bin/kml_http the second. But this doesn't seem likely; why would you install something non-executable into a bin directory?
** Or you can read it explicitly from proc.stdout and proc.stderr, but that gets more complicated.
At any rate, the whole point of executing something in the background is that it keeps running in the background, and your script keeps running in the foreground. So, you're checking its returncode before it's finished, and then moving on to whatever's next in your code, and never coming back again.
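A quick illustration of that timing issue, with a stand-in command:
import subprocess

proc = subprocess.Popen(['sleep', '5'])  # stands in for the long-running server
print(proc.returncode)  # None: no exit status has been collected yet
proc.wait()
print(proc.returncode)  # 0: set once the process has actually finished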
It seems like you want to wait for it to be finished. In that case, don't run it in the background; use proc.wait(), or just use subprocess.call() instead of creating a Popen object. And don't use & either, of course. While we're at it, don't use the shell either:
retcode = subprocess.call(["/opt/kml/bin/kml_http",
                           "--config=/opt/kml/config/httpd.conf.dev"],
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
if retcode != 0:
    # etc.
Now, you won't get to that if statement until kml_http finishes running.
If you want to wait for it to be finished, but at the same time keep doing other stuff, then you're trying to do two things at once in your program, which means you need a thread to do the waiting:
import subprocess
import threading

def run_kml_http():
    retcode = subprocess.call(["/opt/kml/bin/kml_http",
                               "--config=/opt/kml/config/httpd.conf.dev"],
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if retcode != 0:
        # etc.

t = threading.Thread(target=run_kml_http)
t.start()
# Now you can do other stuff in the main thread, and the background thread will
# wait around until kml_http is finished and execute the `if` statement whenever
# that happens
You're using stderr=PIPE, stdout=PIPE, which means that rather than letting the stdout and stderr of the child process be forwarded to the current process's standard output and error streams, they are being redirected to pipes which you must read from in your python process (via proc.stdout and proc.stderr).
To "background" a process, simply omit the usage of PIPE:
#!/usr/bin/python
from subprocess import Popen
from time import sleep

proc = Popen(
    ['/bin/bash', '-c', 'for i in {0..10}; do echo "BASH: $i"; sleep 1; done'])
for x in range(10):
    print "PYTHON: {0}".format(x)
    sleep(1)
proc.wait()
which will show the process being "backgrounded".

How to kill a process been created by subprocess in python?

Under the Linux Ubuntu operating system, I run the test.py script, which contains a GObject loop, using subprocess:
subprocess.call(["test.py"])
Now, this test.py will create a process. Is there a way to kill this process in Python?
Note: I don't know the process ID.
I am sorry if I didn't explain my problem very clearly; I am new to these forums and new to python in general.
I would suggest not using subprocess.call but constructing a Popen object and using its API: http://docs.python.org/2/library/subprocess.html#popen-objects
In particular:
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.terminate
HTH!
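A minimal sketch of that suggestion, reusing the question's command:
import subprocess

proc = subprocess.Popen(["test.py"])  # same command the question passes to call()
# ... the GObject loop runs in the child ...
proc.terminate()  # sends SIGTERM on POSIX
proc.wait()       # reap the child so it doesn't linger as a zombie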
subprocess.call() is just subprocess.Popen().wait():
from subprocess import Popen
from threading import Timer
p = Popen(["command", "arg1"])
print(p.pid) # you can save pid to a file to use it outside Python
# do something else..
# now ask the command to exit
p.terminate()
terminator = Timer(5, p.kill) # give it 5 seconds to exit; then kill it
terminator.start()
p.wait()
terminator.cancel() # the child process exited, cancel the hit
subprocess.call waits for the process to complete and returns the exit code (an integer), hence there is no way of knowing the process id of the child process. You should consider using subprocess.Popen, which fork()s the child process.
