Python subprocess always waits for program [duplicate] - python

I'm trying to port a shell script to a much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes to keep running after the Python script completes. I'm sure it's related to the concept of a daemon somehow, but I couldn't find an easy way to do this.

While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
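If, as in the question, you also want the child to outlive the Python script on POSIX systems, a minimal sketch (assuming some_command is a placeholder for a long-running program on the PATH) is to start it in its own session:

import subprocess

# start_new_session=True (POSIX, Python 3.2+) runs the child in its own
# session, so it keeps running after this script exits
subprocess.Popen(["some_command"], start_new_session=True)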

Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it the same way your shell script did, or you can spawn it:

import os

# spawnl wants the program's name repeated as its first argument (argv[0])
os.spawnl(os.P_DETACH, 'some_long_running_command',
          'some_long_running_command')

(or, alternatively, you may try the os.P_NOWAIT flag; note that os.P_DETACH is Windows-only, so P_NOWAIT is actually the more portable of the two).
See the documentation here.

You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
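If the backgrounded command prints to the terminal, you will usually also want to redirect its output inside the same shell string; a sketch, with some_command again a placeholder:

import os

# send stdout/stderr to /dev/null so the backgrounded command stays quiet
os.system("some_command > /dev/null 2>&1 &")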

I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problem.
The solution is to pass the DETACHED_PROCESS process-creation flag to the underlying CreateProcess function in the Windows API. If you happen to have pywin32 installed you can import the flag from the win32process module; otherwise you should define it yourself:

import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
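For what it's worth, on Python 3.7+ the flag is exposed directly as subprocess.DETACHED_PROCESS, so the magic number shouldn't be needed; a sketch under that assumption:

import subprocess
import sys

# Python 3.7+ defines the Windows process-creation flags as constants
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=subprocess.DETACHED_PROCESS).pid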

Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    time.sleep(5)
    print 'track end'
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print 'main begin'
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print 'main end'
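Note that on POSIX, close_fds by itself does not put the child in a new session; if shell-&-style detachment is the goal, combining it with start_new_session is a reasonable sketch (my addition, not part of the gist; child.py is a placeholder):

import subprocess

# close_fds=True drops inherited file descriptors;
# start_new_session=True (POSIX, Python 3.2+) detaches the child from our session
subprocess.Popen(['python', 'child.py'],
                 close_fds=True, start_new_session=True)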

Both capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to a log file and to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout is updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.

You probably want to start investigating the os module for forking off child processes (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    # some debug output
    sys.exit(1)

if pid == 0:
    # optionally use os.putenv(...) to set environment variables here
    # os.execv takes the program path plus the full argument list,
    # whose first element becomes the child's argv[0]
    os.execv(args[0], args)
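For instance, a hypothetical wrapper in that style (the name spawn_detached and the example command are mine, not from the answer) could look like:

import os
import sys

def spawn_detached(args):
    # args is a list: program path first, then its parameters
    try:
        pid = os.fork()
    except OSError:
        sys.exit(1)
    if pid == 0:
        os.execv(args[0], args)  # child replaces itself with the program
    return pid                   # parent gets the child's pid back

spawn_detached(['/bin/sleep', '30'])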

You can use

import os

pid = os.fork()
if pid == 0:
    # continue with other code here ...

This will make the Python process run in the background.
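If you want full daemon-style detachment rather than a bare fork, the classic recipe adds setsid and a second fork; a sketch of that idiom (my addition, not part of the answer above):

import os

pid = os.fork()
if pid == 0:
    os.setsid()            # child: new session, no controlling terminal
    if os.fork() > 0:
        os._exit(0)        # first child exits immediately
    # grandchild: runs detached; do the background work, then:
    os._exit(0)
# parent: continues (or exits) without waiting on the daemon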

I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console, so in theory the script should not show a window and should work like a background process.

Related

How to pipe the output of execvp to variable in python

I have an assignment where we are making a shell for the Linux OS, and I have a lot of questions!
I was allowed to do it in Python using some of the methods from the os library. The idea is that my program should communicate directly with the Linux operating system calls.
These include:
Create
Open
Close
Read
Write
Exit
Pipe
Exec
Fork
Dup2
Wait
So far I've made a working shell which can execute commands with execvp, but I'm having trouble with the piping part.
I was reading this Q/A and I felt that I almost understood what I have to do.
I guess I have to use dup2 to write (and maybe read later). I'm also a little confused about whether I should use read() and write() at some point with regard to piping.
from os import (
    execvp,
    wait,
    fork,
    close,
    pipe,
    dup2,
)
from os import _exit as kill

STDIN = 0
STDOUT = 1
STDERR = 2
CHILD = 0

def piping(cmd):
    reading, writing = pipe()
    pid = fork()
    if pid > CHILD:
        wait()
        close(writing)
        dup2(reading, STDIN)
        execvp(cmd[1][0], cmd[1])
        kill(127)
    elif pid == CHILD:
        close(reading)
        dup2(writing, STDOUT)
        execvp(cmd[0][0], cmd[0])
        kill(127)
    else:
        print('Command not found:', cmd)

piping([['ls', '-l', '/'], ['grep', 'var']])
If I run this code it works. But I don't understand some things:
How can the execvp'd program know that it gets extra input from the pipe?
Why should I kill at the end, and why with status 127?
How is it possible to run execvp inside the parent? Is this also possible in C?
If I have a nested pipe, e.g. ls -l / | grep var | xclip -selection clipboard, should I create a new fork then? (Maybe some recursion?)
Writing to a file is not part of the assignment, but I might implement it later, once I get the piping to work. Should I use dup2 for that as well, or maybe read/write?
Thanks in advance! :)

Python, close subprocess with different SID when script ends

I have a Python script that launches subprocesses using subprocess.Popen. Each subprocess then launches an external command (in my case, it plays an mp3). The Python script needs to be able to interrupt the subprocesses, so I used the method described here, which gives the subprocess its own session ID. Unfortunately, when I close the Python script now, the subprocess continues to run.
How can I make sure that a subprocess launched from a script, but given a different session ID, still closes when the Python script stops?
See Is there any way to kill a Thread in Python?, and make sure you run it as a thread:

import threading
from subprocess import call

def thread_second():
    call(["python", "secondscript.py"])

processThread = threading.Thread(target=thread_second)
processThread.start()
print 'the file is run in the background'
TL;DR: Change the Popen params: split up the Popen cmd (e.g. "ls -l" -> ["ls", "-l"]) and use shell=False.
~~~
The best solution I've seen so far is simply not to pass shell=True to Popen. That worked for me because I didn't really need shell=True; I was only using it because Popen wouldn't recognize my cmd string and I was too lazy to split it into a list of args. Using the shell caused me a lot of other problems (e.g. using .terminate() becomes a lot more complicated while using a shell and needs a separate session id, see here).
Simply splitting the cmd from a string into a list of args lets me use Popen.terminate() without giving the process its own session id, and without a separate session id the process is closed when the Python script stops.
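A sketch of the splitting step, using shlex.split so that quoted arguments survive (the command here is just an example):

import shlex
import subprocess

args = shlex.split('ls -l "/some dir"')  # ['ls', '-l', '/some dir']
p = subprocess.Popen(args)               # shell=False is the default
p.terminate()                            # works directly, no session id needed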

How to spawn a background job in python [duplicate]

I have some Python code that occasionally needs to spawn a new process to run a shell script in a "fire and forget" manner, i.e. without blocking. The shell script will not communicate with the original Python code and will in fact probably terminate the calling Python process, so the launched shell script cannot be a child process of the calling Python process. I need it to be launched as an independent process.
In other words, let's say I have mycode.py and it launches script.sh. Then mycode.py will continue processing without blocking. The script script.sh will do some things independently and will then actually stop and restart mycode.py. So the process that runs script.sh must be completely independent of mycode.py. How exactly can I do this? I think subprocess.Popen will not block, but it will still create a child process that terminates as soon as mycode.py stops, which is not what I want.
Try prepending "nohup" to script.sh. You'll probably need to decide what to do with stdout and stderr; I just drop it in the example.
import os
from subprocess import Popen
devnull = open(os.devnull, 'wb') # Use this in Python < 3.3
# Python >= 3.3 has subprocess.DEVNULL
Popen(['nohup', 'script.sh'], stdout=devnull, stderr=devnull)
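On Python 3.3 and later the same idea reads (same assumption that script.sh is executable and on the PATH):

from subprocess import Popen, DEVNULL

# subprocess.DEVNULL (Python 3.3+) replaces the manual os.devnull handle
Popen(['nohup', 'script.sh'], stdout=DEVNULL, stderr=DEVNULL)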
Just use subprocess.Popen. The following works OK for me on Windows XP / Windows 7 with Python 2.5.4, 2.6.6, and 2.7.4, and also after being converted with py2exe (not tried with 3.3). It comes from the need to delete expired test-only software on the client's machine.
import os
import subprocess
import sys
from tempfile import gettempdir

def ExitAndDestroy(ProgPath):
    """ Exit and destroy """
    absp = os.path.abspath(ProgPath)
    fn = os.path.join(gettempdir(), 'SelfDestruct.bat')
    script_lines = [
        '@rem Self Destruct Script',
        '@echo ERROR - Attempting to run expired test only software',
        '@pause',
        '@del /F /Q %s' % (absp),
        '@echo Deleted Offending File!',
        '@del /F /Q %s\n' % (fn),
        # '@exit\n',
    ]
    bf = open(fn, 'wt')
    bf.write('\n'.join(script_lines))
    bf.flush()
    bf.close()
    p = subprocess.Popen([fn], shell=False)
    sys.exit(-1)

if __name__ == "__main__":
    ExitAndDestroy(sys.argv[0])

python 2.7 - subprocess control interaction with mpg123

I asked a question related to this several weeks ago here:
Python, mpg123 and subprocess not properly using stdin.write or communicate
Thanks to help from there I was able to do what I needed at the time (didn't call q, but terminated the subprocess to stop it).
Now though I seem to be in another bit of a mess.
from subprocess import Popen, PIPE, STDOUT
p = Popen(["mpg123", "-C", "test.mp3"], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
# wait a few seconds before sending this; "q" without a newline is how the
# player's controls quit when it is run as "mpg123 -C test.mp3" on the
# command line
p.communicate(input='q')[0]
Much like before, I need this to be able to quit out of mpg123 with its standard controls (press 'q' to quit, '-' to turn the volume down, '+' to turn it up, etc.). I use the code above, which should theoretically work, and it does work with similar programs. Does anyone know of a way I can use the controls built into mpg123 (the ones accessible when running "mpg123 -C whatever.mp3") through a subprocess? terminate isn't enough anymore, as I will need the controls ^_^
EDIT: Many thanks to abarnert for the amazing answer =)
OK, so the new code is simply a slightly modified version of abarnert's answer; however, mpg123 doesn't seem to be accepting the commands:
import os
import pty
import sys
import time

pid, fd = os.forkpty()
if pid:
    time.sleep(5)
    os.write(fd, 'b')  # this should've restarted the file
    time.sleep(5)
    os.write(fd, 'q')  # unfortunately doesn't quit here =(
    time.sleep(5)      # quits after this is finished executing
else:
    os.spawnl(os.P_WAIT, '/usr/bin/mpg123', '-C', 'TEST file.mp3')
If you really need the controls, you can't just use Popen.
mpg123 only enables terminal control if its stdin is a tty, not if it's a file or pipe. That's why you get this line in the banner:
Terminal control enabled, press 'h' for listing of keys and functions.
And the whole point of Popen (and subprocess, and the POSIX APIs it's built on) is pipes.
So, what can you do about it?
On linux, you can use the pty module. It may also work on other *nix platforms, but it may not—even if it gets built and included in your stdlib. As the docs say:
Because pseudo-terminal handling is highly platform dependent, there is code to do it only for Linux. (The Linux code is supposed to work on other platforms, but hasn’t been tested yet.)
It definitely runs on *BSD platforms on 2.7 and 3.3, and the example in the docs seems to work on both Mac OS X and FreeBSD… but that's as far as I've checked.
Meanwhile, most POSIX platforms will at least have os.forkpty, and that's not much harder, so here's a trivial program that plays the first 5 seconds of a song passed as its first arg:
import os
import pty
import sys
import time
pid, fd = os.forkpty()
if pid:
time.sleep(5)
os.write(fd, 'q')
else:
os.spawnl(os.P_WAIT, # mode
'/usr/local/bin/mpg123', # path
'/usr/local/bin/mpg123', '-C', sys.argv[1]) # args
Note that I used os.spawnl above. This is probably not what you want in a real program; it's for pedagogic purposes, to encourage you to read the docs (and the corresponding manpages) and understand this family of functions.
As the docs explain, this does not use the PATH environment variable, so you need to specify the full path to the program. You can just use spawnlp instead of spawnl to fix this.
Also, spawn may (in fact, always does, although the docs aren't entirely clear) do another fork to execute the child. This really isn't necessary, but spawn does things that you would need to do manually if you just called exec. If you know what you're doing, you may well want to use execl (or execlp) instead of spawnl.
You can even use most of the functionality in subprocess as long as you're careful (do not create any pipes, and remember that you'll end up doing two forks, so make sure to set up the parent/child relationship properly).
Also notice that you need to pass the path to mpg123 twice: once as the path, and then once as the child program's argv[0]. You could also just pass mpg123 the second time. Or, ideally, look at what ps says when you run it from the shell, and pass that. At any rate, you have to pass something as the argv[0]; otherwise, -C ends up being the argv[0], which means mpg123 won't think you gave it a -C flag to enable control keys, but rather that you renamed it to -C and ran it with no flags…
Anyway, you really do need to read the docs to understand what each of these functions does, instead of just treating it like magic code that you don't understand. So, I intentionally used the simplest possible solution to encourage that.
On Windows, there is no such thing as a pty, and no way to do this at all with the facilities built in to Python. You will need to use one of the various third-party libraries for controlling a cmd.exe console (aka DOS prompt) instead.
Based on abarnert's idea, we can open a pseudo-terminal and pass it to subprocess.
import os
import pty
import subprocess
import time
master, slave = os.openpty()
p = subprocess.Popen(['mpg123', '-C', 'music.mp3'], stdin=master)
time.sleep(3)
os.write(slave, 's')
time.sleep(3)
os.write(slave, 's')
time.sleep(6)
os.write(slave, 'q')
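It may also be worth reaping the player and releasing the pty afterwards; a small follow-up sketch (my addition):

p.wait()          # reap mpg123 once it has quit
os.close(slave)
os.close(master)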

Problems killing a process with Python on Solaris

I have a C++ program, called C, that is designed to shut down when it receives a SIGINT signal. I've written a Python program P that runs C as a subprocess. I want P to stop C. I tried 3 things and I'd like to know why some of them didn't work.
Attempt #1:
import subprocess
import signal
import os
p = subprocess.Popen(...)
...
os.killpg(p.pid, signal.SIGINT)
This code gives me the error
OSError: [Errno 3] No such process
even though the p.pid matches the pid displayed by ps.
Attempt #2:
import subprocess
import signal
import os
p = subprocess.Popen(...)
...
os.system('kill -SIGINT %u' % p.pid)
This gives me the error
sh: kill: bad signal
even though kill -SIGINT <pid> works from the terminal.
Attempt #3:
import subprocess
import signal
import os
p = subprocess.Popen(...)
...
os.system('kill -2 %u' % p.pid)
This works.
My question is, why didn't #1 and #2 work?
Edit: my original assumption was that since the documentation for os.kill() says "New in version 2.7: Windows support", os.kill() was (a) first available in 2.7 and (b) worked on Windows. After reading the answers below, I ran os.kill() on Solaris (which I should have done in the first place, sorry) and it does work in 2.4. Obviously, the documentation means that Windows support is new in 2.7. Oops.
The first fails because os.killpg kills a process group, identified by its leader; you have a simple process, not a process group. Try os.kill instead. The second fails because the shell builtin kill understands symbolic signal names, but the external kill command on Solaris doesn't (whereas on *BSD and Linux it does); use a numeric signal (SIGINT is 2 on Solaris), or use Python's predefined signal constants from the signal module. That said, use Popen's own interface instead, as mentioned by someone else; don't reinvent the wheel, you're liable to cut some corners.
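In other words, a corrected version of attempt #1 might read:

import os
import signal

# signal the single child process (not a group) with a portable constant
os.kill(p.pid, signal.SIGINT)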
The Popen object has a kill() method that you can invoke as well as a terminate() method and a generic send_signal() method.
I would use one of these rather than trying any of the out of band stuff you'd use with the os interface. You've already got a handle to the process, you should use it!
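A sketch of that, reusing the p from the question:

import signal

p.send_signal(signal.SIGINT)  # deliver SIGINT through the Popen handle
# or: p.terminate()  (SIGTERM) / p.kill()  (SIGKILL)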
