I want to initiate a process from my Python script main.py. Specifically, I want to run the command below:
`nohup python ./myfile.py &`
and the file myfile.py should continue running, even after the main.py script exits.
I also wish to get the pid of the new process.
I tried:
os.spawnl*
os.exec*
subprocess.Popen
and all of them terminate myfile.py when the main.py script exits.
Update: Can I use os.startfile with xdg-open? Is it the right approach?
Example
a = subprocess.Popen([sys.executable, "nohup /usr/bin/python25 /long_process.py &"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
print a.pid
If I check ps aux | grep long_process, there is no process running.
long_process.py just keeps printing some text; it never exits.
Am I doing anything wrong here?
You open your long-running process and keep pipes to it, so you expect to talk to it. When your launcher script exits, you can no longer talk to it: the long-running process receives a SIGPIPE on its next write and exits.
The following just worked for me (Linux, Python 2.7).
Create a long-running executable:
$ echo "sleep 100" > ~/tmp/sleeper.sh
Run Python REPL:
$ python
>>>
import subprocess
import os
p = subprocess.Popen(['/bin/sh', os.path.expanduser('~/tmp/sleeper.sh')])
# look ma, no pipes!
print p.pid
# prints 29893
Exit the REPL and see the process still running:
>>> ^D
$ ps ax | grep sleeper
29893 pts/0 S 0:00 /bin/sh .../tmp/sleeper.sh
29917 pts/0 S+ 0:00 grep --color=auto sleeper
If you want to first communicate with the started process and then leave it alone to run further, you have a few options:
Handle SIGPIPE in your long-running process and do not die on it; live without stdin after the launcher process exits (see the sketch after this list).
Pass whatever you wanted using arguments, environment, or a temporary file.
If you want bidirectional communication, consider using a named pipe (man mkfifo) or a socket, or writing a proper server.
Make the long-running process fork after the initial bidirectional communication phase is done.
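For the first option, a minimal sketch of what the long-running process itself could do (Python 3; the print loop is just a stand-in for real work):

import os
import signal
import sys
import time

# Ignore SIGPIPE so a write to the closed pipe raises BrokenPipeError
# instead of killing this process outright.
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

while True:
    try:
        print('still alive', flush=True)
    except BrokenPipeError:
        sys.stdout = open(os.devnull, 'w')  # the launcher is gone; stop writing
    time.sleep(1)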
You can use os.fork().
import os

pid = os.fork()
if pid == 0:  # new process
    os.system("nohup python ./myfile.py &")
    exit()
# parent process continues
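A fuller variant of the same idea is the classic Unix double fork: the first child calls setsid() and forks again, so the grandchild is re-parented to init and fully detached, and its pid can be reported back to the parent. A minimal sketch; the spawn_detached helper name and the ./myfile.py path are illustrative:

import os
import sys

def spawn_detached(argv):
    # Double-fork so the grandchild is re-parented to init and survives
    # the launcher; the grandchild's pid is passed back through a pipe.
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # first child
        os.close(r)
        os.setsid()                   # new session: detach from the terminal
        grandchild = os.fork()
        if grandchild == 0:           # grandchild: become the target program
            os.execvp(argv[0], argv)  # does not return on success
        os.write(w, str(grandchild).encode())
        os._exit(0)                   # first child exits, orphaning the grandchild
    os.close(w)
    detached_pid = int(os.read(r, 32))
    os.close(r)
    os.waitpid(pid, 0)                # reap the first child immediately
    return detached_pid

# pid = spawn_detached([sys.executable, './myfile.py'])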
I could not see any process running.
You don't see any process running because the child python process exits immediately. The Popen arguments are incorrect as user4815162342 says in the comment.
To launch a completely independent process, you could use the python-daemon package, or use systemd/supervisord/etc.:
#!/usr/bin/python25
import daemon
from long_process import main
with daemon.DaemonContext():
    main()
Though it might be enough in your case to start the child with the correct Popen arguments:

import os
import sys
import time
from subprocess import Popen, STDOUT

with open(os.devnull, 'r+b', 0) as DEVNULL:
    p = Popen(['/usr/bin/python25', '/path/to/long_process.py'],
              stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT, close_fds=True)
time.sleep(1)  # give it a second to launch
if p.poll():  # the process has already finished with a nonzero exit code
    sys.exit(p.returncode)
If the child process doesn't require python2.5 then you could use sys.executable instead (to use the same Python version as the parent).
Note: the code closes DEVNULL in the parent without waiting for the child process to finish; this has no effect on the child.
Related
In python 2.7 on Ubuntu 14.04, I launch a process like this:
bag_process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

for i in range(5):
    print "Countdown: {}".format(5 - i - 1)
    time.sleep(1)

print "Sending SIGINT to PID {}".format(bag_process.pid)
bag_process.send_signal(signal.SIGINT)

(bag_out, bag_err) = bag_process.communicate()
The program hangs on the communicate() line. When I open another terminal, I run ps -ef | grep ### to find the pid of the subprocess, and I see it's <defunct>.
Why is the child program becoming defunct, and the parent program hanging on communicate()? Provided that the child truly exits after receiving SIGINT, how can I make the parent program reliably handle that without hanging?
The problem was how the process was being killed. Don't signal only the top-level process like this:
bag_process.send_signal(signal.SIGINT)
Instead, kill the process and all of its sub-processes like this:
import psutil

parent = psutil.Process(bag_process.pid)
for child in parent.get_children(recursive=True):  # children() in newer psutil
    child.send_signal(signal.SIGINT)
bag_process.send_signal(signal.SIGINT)
I have a Flask application using Python 3. Sometimes it creates a daemon process to run a script, and I want to kill the daemon on timeout (using signal.SIGINT).
However, some processes created by os.system (for example, os.system('git clone xxx')) are still running after the daemon was killed.
What should I do? Thanks all!
In order to be able to kill a process you need its process id (usually referred to as a pid). os.system doesn't give you that, simply returning the value of the subprocess's return code.
The newer subprocess module gives you much more control, at the expense of somewhat more complexity. In particular it allows you to wait for the process to finish, with a timeout if required, and gives you access to the subprocess's pid. While I am not an expert in its use, this seems to
work. Note that this code needs Python 3.3 or better to use the timeout argument to the Popen.wait call.
import subprocess

process = subprocess.Popen(['git', 'clone', 'https://github.com/username/reponame'])
try:
    print('Running in process', process.pid)
    process.wait(timeout=10)
except subprocess.TimeoutExpired:
    print('Timed out - killing', process.pid)
    process.kill()
print("Done")
The following command on the command line will show you all the running instances of python.
$ ps aux | grep -i python
username 6488 0.0 0.0 2434840 712 s003 R+ 1:41PM 0:00.00 python
The first number, 6488, is the PID, process identifier. Look through the output of the command on your machine to find the PID of the process you want to kill.
You can run another command to kill the correct process.
$ kill 6488
You might need to use sudo with this command. Be careful though, you don't want to kill the wrong thing or bad stuff could happen!
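The same can be done from within Python via os.kill; the pid 6488 is just the example number from the ps output above:

import os
import signal

os.kill(6488, signal.SIGTERM)  # equivalent to running `kill 6488` in the shell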
I'm new to python, so here's what I'm looking to get done.
I would like to use Python to manage some of my game servers and start/stop them. For this I would like to run every game server in its own process.
What's the best way to create processes using python, so these processes can continue even if the main application is stopped?
To start a server I only need to execute shell code.
After stopping my main application and restarting it, how can I get access to these processes again?
I'm not sure if I understand the question completely, but maybe something like this?
Run process:
import subprocess
subprocess.Popen(['/path/gameserver']) #keeps running
And in another script you can use 'ps -A' to find the pid and kill (or restart) it:
import os
import signal
import subprocess

p = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
    if 'gameserver' in line:
        pid = int(line.split(None, 1)[0])
        os.kill(pid, signal.SIGKILL)
Check the subprocess module; there is a function called call.
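For example (a minimal sketch; /path/gameserver is the placeholder path from above):

import subprocess

# call() blocks until the command finishes and returns its exit code.
rc = subprocess.call(['/path/gameserver'])
print(rc)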
You may need to set the process to not be a daemon process.
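That advice applies when the children are created with the multiprocessing module: daemonic children are terminated as soon as the parent exits. A minimal sketch, assuming a serve() function of your own:

import time
from multiprocessing import Process

def serve():
    while True:
        time.sleep(1)  # stand-in for real server work

if __name__ == '__main__':
    p = Process(target=serve)
    p.daemon = False  # the default; daemonic children die with the parent
    p.start()
    print(p.pid)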
Environment: Raspberry Pi Wheezy
I have a python program that uses Popen to call another python program
from subprocess import *
oJob = Popen('sudo python mypgm.py',shell=True)
Another menu option is supposed to end the job immediately
oJob.kill()
but the job is still running??
When you add the option shell=True, Python launches a shell and the shell in turn launches the process python mypgm.py. You are killing the shell process here, which doesn't kill its own child that runs mypgm.py.
To ensure that the child process gets killed on oJob.kill(), you need to group them all under one process group and make the shell process the group leader.
The code is,
import os
import signal
import subprocess
# The os.setsid() is passed in the argument preexec_fn so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(pro.pid, signal.SIGTERM)  # send the signal to the whole process group
When you send the SIGTERM signal to the process group, the shell and all of its child processes receive it.
You need to add a creationflags argument:

oJob = Popen('sudo python mypgm.py', shell=True, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)

source

subprocess.CREATE_NEW_PROCESS_GROUP

A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess. Note, however, that creationflags exists on Windows only; on POSIX systems such as the Raspberry Pi here, use preexec_fn=os.setsid as shown above.
EDIT: I agree with the comment about how to import things and why you are getting "something is undefined". Also, the other answer seems to be on the right track in getting the pid:
import subprocess as sub
oJob = sub.Popen('sudo python mypgm.py', creationflags=sub.CREATE_NEW_PROCESS_GROUP)
oJob.kill()
Warning: Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input.
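A minimal sketch of the safer list form, where the URL stands in for untrusted input:

import subprocess

repo = 'https://github.com/username/reponame'  # imagine this came from a user
# With a list of arguments there is no shell, so shell metacharacters in
# `repo` are passed through literally instead of being executed.
subprocess.check_call(['git', 'clone', repo])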
I am working on executing the shell script from Python and so far it is working fine. But I am stuck on one thing.
On my Unix machine I am executing one command in the background by using & like this. This command will start my app server:
david#machineA:/opt/kml$ /opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &
Now I need to execute the same thing from my Python script, but as soon as it executes my command it never goes to the else block and never prints out execute_steps::Successful; it just hangs there.
proc = subprocess.Popen("/opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &",
                        shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        executable='/bin/bash')
if proc.returncode != 0:
    logger.error("execute_steps::Errors while executing the shell script: %s" % stderr)
    sleep(0.05)  # delay for 50 ms
else:
    logger.info("execute_steps::Successful: %s" % stdout)
Anything wrong I am doing here? I want to print out execute_steps::Successful after executing the shell script in the background.
All other commands work fine; only the command which I am trying to run in the background doesn't.
There's a couple things going on here.
First, you're launching a shell in the background, and then telling that shell to run the program in the background. I don't know why you think you need both, but let's ignore that for now. In fact, by adding executable='/bin/bash' on top of shell=True, you're actually trying to run a shell to run a shell to run the program in the background, although that doesn't actually quite work.*
Second, you're using PIPE for the process's output and error, but then not reading them. This can cause the child to deadlock. If you don't want the output, use DEVNULL, not PIPE. If you want to process the output yourself, use proc.communicate(),** or use a higher-level function like check_output. If you just want it to intermingle with your own output, just leave those arguments off.
* If you're using the shell because kml_http is a non-executable script that has to be run by /bin/bash, then don't use shell=True or executable for that; just make /bin/bash the first argument in the command line and /opt/kml/bin/kml_http the second. But this doesn't seem likely; why would you install something non-executable into a bin directory?
** Or you can read it explicitly from proc.stdout and proc.stderr, but that gets more complicated.
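For the communicate() route mentioned above, a minimal sketch; note it blocks until the process exits, so use it only when you do want to wait:

import subprocess

proc = subprocess.Popen(['/opt/kml/bin/kml_http',
                         '--config=/opt/kml/config/httpd.conf.dev'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # drains both pipes to EOF, then reaps the child
if proc.returncode != 0:
    print(err)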
At any rate, the whole point of executing something in the background is that it keeps running in the background, and your script keeps running in the foreground. So, you're checking its returncode before it's finished, and then moving on to whatever's next in your code, and never coming back again.
It seems like you want to wait for it to be finished. In that case, don't run it in the background: use proc.wait(), or just use subprocess.call() instead of creating a Popen object. And don't use & either, of course. While we're at it, don't use the shell either:
retcode = subprocess.call(["/opt/kml/bin/kml_http",
                           "--config=/opt/kml/config/httpd.conf.dev"],
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
if retcode != 0:
    # etc.
Now, you won't get to that if statement until kml_http finishes running.
If you want to wait for it to be finished, but at the same time keep doing other stuff, then you're trying to do two things at once in your program, which means you need a thread to do the waiting:
def run_kml_http():
    retcode = subprocess.call(["/opt/kml/bin/kml_http",
                               "--config=/opt/kml/config/httpd.conf.dev"],
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if retcode != 0:
        # etc.

t = threading.Thread(target=run_kml_http)
t.start()
# Now you can do other stuff in the main thread, and the background thread will
# wait around until kml_http is finished and execute the `if` statement whenever
# that happens
You're using stderr=PIPE, stdout=PIPE, which means that rather than letting the child process's standard output and error be forwarded to the current process's streams, they are redirected to pipes which you must read from in your Python process (via proc.stdout and proc.stderr).
To "background" a process, simply omit the usage of PIPE:
#!/usr/bin/python
from subprocess import Popen
from time import sleep
proc = Popen(
    ['/bin/bash', '-c', 'for i in {0..10}; do echo "BASH: $i"; sleep 1; done'])

for x in range(10):
    print "PYTHON: {0}".format(x)
    sleep(1)

proc.wait()
which will show the process being "backgrounded".