I'm working on a nifty little function:
def startProcess(name, path):
    """
    Starts a process in the background and writes a PID file

    returns integer: pid
    """
    # Check if the process is already running
    status, pid = processStatus(name)

    if status == RUNNING:
        raise AlreadyStartedError(pid)

    # Start process
    process = subprocess.Popen(path + ' > /dev/null 2> /dev/null &', shell=True)

    # Write PID file
    pidfilename = os.path.join(PIDPATH, name + '.pid')
    pidfile = open(pidfilename, 'w')
    pidfile.write(str(process.pid))
    pidfile.close()

    return process.pid
The problem is that process.pid isn't the correct PID. It seems it's always 1 lower than the correct PID. For instance, it says the process started at 31729, but ps says it's running at 31730. Every time I've tried it's off by 1. I'm guessing the PID it returns is the PID of the current process, not the started one, and the new process gets the 'next' pid which is 1 higher. If this is the case, I can't just rely on returning process.pid + 1 since I have no guarantee that it'll always be correct.
Why doesn't process.pid return the PID of the new process, and how can I achieve the behaviour I'm after?
From the documentation at http://docs.python.org/library/subprocess.html:
Popen.pid
    The process ID of the child process.
    Note that if you set the shell argument to True, this is the process ID of the spawned shell.
If shell is False, it should behave as you expect, I think.
If you were relying on shell being True to resolve the executable's path via the PATH environment variable, you can accomplish the same thing using shutil.which, then pass the absolute path to Popen. (As an aside, if you are using Python 3.5 or newer, you should be using subprocess.run rather than Popen.)
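For example, here is a minimal sketch of that approach ('myprogram' is a placeholder name, and the /dev/null redirection mirrors the question):

import shutil
import subprocess

# Resolve the executable on PATH, then launch it without a shell,
# so that Popen.pid is the PID of the program itself.
exe = shutil.which('myprogram')  # returns None if not found on PATH
process = subprocess.Popen([exe],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
print(process.pid)  # the PID of myprogram, not of an intermediate shell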
Related
I have a python script that does this:
p = subprocess.Popen(["python", "pythonscript.py"], stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=False)
theStdin = request.input.encode('utf-8')
(outputhere, errorshere) = p.communicate(input=theStdin)
It works as expected; it waits for the subprocess to finish via p.communicate(). However, within pythonscript.py I want to "fire and forget" a "grandchild" process. I'm currently doing this by overriding the join method:
class EverLastingProcess(Process):
    def join(self, *args, **kwargs):
        pass  # Overrides join so that it doesn't block. Otherwise the parent waits.

    def __del__(self):
        pass
And starting it like this:
p = EverLastingProcess(target=nameOfMyFunction, args=(arg1, etc,), daemon=False)
p.start()
This also works fine when I just run pythonscript.py in a bash terminal or bash script: control returns and a response comes back while the child process started by EverLastingProcess keeps going. However, when I run pythonscript.py with Popen as shown above, the timings suggest that Popen is waiting on the grandchild to finish.
How can I make it so that the Popen only waits on the child process, and not any grandchild processes?
The solution below (overriding the join method, with the shell=True addition) stopped working when we upgraded our Python recently.
There are many references on the internet about the pieces and parts of this, but it took me some doing to come up with a useful solution to the entire problem.
The following solution has been tested in Python 3.9.5 and 3.9.7.
Problem Synopsis
The names of the scripts match those in the code example below.
A top-level program (grandparent.py):
- Uses subprocess.run or subprocess.Popen to call a program (parent.py).
- Checks the return value from parent.py for sanity.
- Collects stdout and stderr from the main process, parent.py.
- Does not want to wait around for the grandchild to complete.

The called program (parent.py):
- Might do some stuff first.
- Spawns a very long process (the grandchild, "longProcess" in the code below).
- Might do a little more work.
- Returns its results and exits while the grandchild (longProcess) continues doing what it does.
Solution Synopsis
The important part isn't so much what happens with subprocess. Instead, the method for creating the grandchild/longProcess is the critical part. It is necessary to ensure that the grandchild is truly emancipated from parent.py.
Subprocess only needs to be used in a way that captures output.
The longProcess (grandchild) needs the following to happen:
- It should be started using multiprocessing.
- It needs multiprocessing's 'daemon' set to False.
- It should also be invoked using the double-fork procedure.
- In the double-fork, extra work needs to be done to ensure that the process is truly separate from parent.py. Specifically:
  - Move the execution away from the environment of parent.py.
  - Use file handling to ensure that the grandchild no longer uses the file handles (stdin, stdout, stderr) inherited from parent.py.
Example Code
grandparent.py - calls parent.py using subprocess.run()
#!/usr/bin/env python3
import subprocess
p = subprocess.run(["/usr/bin/python3", "/path/to/parent.py"], capture_output=True)
## Comment the following if you don't need reassurance
print("The return code is: " + str(p.returncode))
print("The standard out is: ")
print(p.stdout)
print("The standard error is: ")
print(p.stderr)
parent.py - starts the longProcess/grandchild and exits, leaving the grandchild running. After 10 seconds, the grandchild will write timing info to /tmp/timelog.
#!/usr/bin/env python3
import time
def longProcess():
    time.sleep(10)
    fo = open("/tmp/timelog", "w")
    fo.write("I slept! The time now is: " + time.asctime(time.localtime()) + "\n")
    fo.close()
import os, sys

def spawnDaemon(func):
    # Do the UNIX double-fork magic; see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177).
    try:
        pid = os.fork()
        if pid > 0:  # parent process
            return
    except OSError as e:
        print("fork #1 failed. See next.")
        print(e)
        sys.exit(1)

    # Decouple from the parent environment.
    os.chdir("/")
    os.setsid()
    os.umask(0)

    # Do the second fork.
    try:
        pid = os.fork()
        if pid > 0:
            # Exit from the second parent.
            sys.exit(0)
    except OSError as e:
        print("fork #2 failed. See next.")
        print(e)
        sys.exit(1)

    # Redirect standard file descriptors.
    # Here, they are reassigned to /dev/null, but they could go elsewhere.
    sys.stdout.flush()
    sys.stderr.flush()
    si = open('/dev/null', 'r')
    so = open('/dev/null', 'a+')
    se = open('/dev/null', 'a+')
    os.dup2(si.fileno(), sys.stdin.fileno())
    os.dup2(so.fileno(), sys.stdout.fileno())
    os.dup2(se.fileno(), sys.stderr.fileno())

    # Run your daemon.
    func()

    # Ensure that the daemon exits when complete.
    os._exit(os.EX_OK)
import multiprocessing

daemonicGrandchild = multiprocessing.Process(target=spawnDaemon, args=(longProcess,))
daemonicGrandchild.daemon = False
daemonicGrandchild.start()
print("have started the daemon")  # This will get captured as stdout by grandparent.py
References
The code above was mainly inspired by the following two resources.
This reference is succinct about the use of the double-fork but does not include the file handling we need in this situation.
This reference contains the needed file handling, but does many other things that we do not need.
Edit: the below stopped working after a Python upgrade, see the accepted answer from Lachele.
Working answer from a colleague: change to shell=True, like this:
p = subprocess.Popen("python pythonscript.py", stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
I've tested it, and the grandchild subprocesses stay alive after the child process returns, without it waiting for them to finish.
I am trying to kill a subprocess via its PID using subprocess.call(). I obtain the PID by assigning the Popen object to a variable, like this:
proc = subprocess.Popen(["sudo", "scrolling-text-example", "-y7"])
x = proc.pid
When I am ready to end this subprocess, I use this code:
subprocess.call(["sudo","kill",str(x)])
This does not kill the subprocess, but if I open a terminal (let's say x is 1234) and type sudo kill 1234, it will kill the subprocess.
Use x = str(proc.pid) and subprocess.call(["sudo", "kill", "-9", x]), and make sure the call is granted root privileges. This turns the process number into a string before calling the subprocess. Also, as I mentioned, use -9 (or -15 if you prefer that).
I found that the main process I identify with x = proc.pid actually runs a child process, which is the one I needed to kill; so, from the identified parent process, we need to kill its child processes. Adding "-P" makes pkill match by parent PID, which covers this situation.
The following command structure is what I needed:
subprocess.call(["sudo","pkill","-9","-P",x])
I need to get the PID of a program in order to suspend it temporarily. How can I get the PID of a program in Python from just its name, so that I can use that PID along with psutil to suspend the process? Let's just call it process.exe for now.
I have tried using a list comprehension (item for item) along with psutil. However, this gives me additional text along with the PID, and I am unsure how to remove this unnecessary text.
I have tried using os.getpid, but this gives me the PID of Python rather than the PID of the process I want.
1.
import psutil
pid = [item for item in psutil.process_iter() if item.name() == 'process.exe']
print(pid)
2.
import os
pid = os.getpid()
print(pid)
For (1) I want the output to just be
pid=x
However, right now it is:
[psutil.Process(pid=x, name='process.exe', started='14:11:40')]
In (1), you are receiving a list of Process objects, which contain all the information about the process; you can use the first object in the list to fetch the PID:
pid[0].pid
In (2), os.getpid() returns the process ID of the Python interpreter itself. It is of hardly any use here; only use it when a running Python script needs its own process ID.
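Since the goal is to suspend the process, here is a minimal sketch combining the two steps; 'process.exe' is the placeholder name from the question, and suspend/resume are psutil.Process methods:

import psutil

# Find the PIDs of every process named 'process.exe'.
pids = [p.pid for p in psutil.process_iter() if p.name() == 'process.exe']

if pids:
    proc = psutil.Process(pids[0])
    proc.suspend()  # temporarily stop the process
    # ... later, let it continue:
    proc.resume()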
Say I have a script like this:
p = subprocess.Popen('python forked_job.py', shell=True)
status = p.wait()
# Do something with status
And then forked_job.py looks like this:
import os
import sys
print('hi')

pid = os.fork()
if pid == 0:
    sys.exit(do_some_work())
else:
    sys.exit(do_other_work())
How can I make sure both processes return a 0 status code?
When you fork, you have a parent and a child process. When pid == 0, you are in the child process; your else branch is what runs in the parent process.
Similar to calling Popen.wait, as you do in the first script, you want to call os.wait in the second one.
From the docs:
os.wait()
Wait for completion of a child process, and return a tuple
containing its pid and exit status indication: a 16-bit number, whose
low byte is the signal number that killed the process, and whose high
byte is the exit status (if the signal number is zero); the high bit
of the low byte is set if a core file was produced.
Availability: Unix
As you can see, this of course assumes that you're running unix. Since os.fork is also Unix-only, this seems likely.
So, have the parent call os.wait and reflect the status back up in what the parent returns.
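For example, a minimal sketch (do_some_work and do_other_work are the placeholders from the question):

import os
import sys

pid = os.fork()
if pid == 0:
    sys.exit(do_some_work())  # child exits with its own status
else:
    other_status = do_other_work()
    _, status = os.wait()  # wait for the child to finish
    child_status = os.WEXITSTATUS(status)
    # Report failure if either process failed.
    sys.exit(child_status or other_status)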
One thing to note, though it probably doesn't matter, and you're probably aware: you're technically not doing this:

      main_script
      /         \
forked_job   forked_job

But instead:

main_script
     |
forked_job_parent
     |
forked_job_child

(I'm attempting to show the "ownership", and hence the usage of the second wait.)
Environment: Raspberry Pi Wheezy
I have a Python program that uses Popen to call another Python program:
from subprocess import *
oJob = Popen('sudo python mypgm.py',shell=True)
Another menu option is supposed to end the job immediately:
oJob.kill()
but the job is still running??
When you add the option shell=True, Python launches a shell, and the shell in turn launches the process python mypgm.py. You are killing the shell process here, which doesn't kill its own child that runs mypgm.py.
To ensure that the child process gets killed on oJob.kill(), you need to group them all under one process group and make the shell process the group leader.
The code is:

import os
import signal
import subprocess

cmd = 'sudo python mypgm.py'  # the command from the question

# The os.setsid() is passed in the argument preexec_fn, so
# it's run after the fork() and before exec() to run the shell.
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)

os.killpg(pro.pid, signal.SIGTERM)  # Send the signal to all processes in the group
When you send the SIGTERM signal to the process group, it kills the shell process and all of its children as well.
You need to add a creationflags argument (note that this flag is only available on Windows):
oJob = Popen('sudo python mypgm.py', shell=True, creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
source
subprocess.CREATE_NEW_PROCESS_GROUP
A Popen creationflags parameter to specify that a new process group will be created. This flag is necessary for using os.kill() on the subprocess.
EDIT: I agree with the comment about how to do the imports, which explains why you were getting a "name is not defined" error. Also, the other answer seems to be on the right track in getting the PID.
import subprocess as sub
oJob = sub.Popen('sudo python mypgm.py', creationflags = sub.CREATE_NEW_PROCESS_GROUP)
oJob.kill()
Warning: Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input.
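For example, a sketch of the safer pattern: pass the command as a list so that no shell ever parses the untrusted input.

import subprocess

# Hostile input stays data, not code, because no shell is involved.
filename = "untrusted; rm -rf ~"
subprocess.run(["ls", "-l", filename])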