I am running two Python scripts using subprocess, and one of them still runs after the other finishes.
import subprocess
subprocess.run("python3 script_with_loop.py & python3 script_with_io.py", shell=True)
script_with_loop still runs in the background.
How can I kill both scripts if one of them dies?
So, you're basically not using Python here, you're using your shell.
a & b runs a in the background and then runs b. Since you're using the shell, if you wanted to terminate the background task, you'd have to use shell commands to do that.
Of course, since you're using python, there is a better way.
with subprocess.Popen(["somecommand"]) as proc:
    try:
        subprocess.run(["othercommand"])
    finally:
        proc.terminate()
Looking at your code though - python3 script_with_loop.py and python3 script_with_io.py - my guess is you'd be better off using the asyncio module because it basically does what the names of those two files are describing.
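For instance, here is a minimal asyncio sketch of that idea, using the script names from the question (terminating the survivor is one reasonable policy; you could also kill() it):
import asyncio

async def main():
    # start both scripts as child processes
    p1 = await asyncio.create_subprocess_exec("python3", "script_with_loop.py")
    p2 = await asyncio.create_subprocess_exec("python3", "script_with_io.py")
    # wait until either one exits
    await asyncio.wait(
        [asyncio.create_task(p1.wait()), asyncio.create_task(p2.wait())],
        return_when=asyncio.FIRST_COMPLETED,
    )
    # then terminate whichever is still running
    for proc in (p1, p2):
        if proc.returncode is None:
            proc.terminate()

asyncio.run(main())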
You should use threading for this sort of thing. Try this:
import os
import threading

def script_with_loop():
    try:
        # script_with_loop.py code goes here
        ...
    except:
        os._exit(1)

def script_with_io():
    try:
        # script_with_io.py code goes here
        ...
    except:
        os._exit(1)

t1 = threading.Thread(target=script_with_loop, daemon=True)
t2 = threading.Thread(target=script_with_io, daemon=True)
t1.start()
t2.start()
t1.join()
t2.join()
I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm", "-r", "some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
proc = subprocess.Popen(["sleep", "30"])
proc.communicate()  # Will block for 30 seconds
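If you want to check on a background process without blocking, a small sketch using poll() (the sleep command is just a stand-in):
import subprocess

proc = subprocess.Popen(["sleep", "30"])
# ... do other work ...
if proc.poll() is None:  # poll() returns None until the process exits
    print("still running")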
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs:
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
# note: spawnl needs the program name repeated as the first argument
os.spawnl(os.P_DETACH, 'some_long_running_command', 'some_long_running_command')
(or, alternatively, you may try the os.P_NOWAIT flag, which is more portable; os.P_DETACH only exists on Windows).
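For example, a minimal sketch using P_NOWAIT, with /bin/sleep as a stand-in for the long-running command:
import os

# P_NOWAIT returns the child's pid immediately instead of waiting for it
pid = os.spawnl(os.P_NOWAIT, '/bin/sleep', 'sleep', '30')
print("spawned child with pid", pid)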
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problem.
The solution is to pass the DETACHED_PROCESS process creation flag to the underlying CreateProcess function in the Win32 API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess
if len(sys.argv) == 2:
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read() it, the process can block.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout both to log files and to my own stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout is updated every 0.5 seconds, two lines at a time, to eventually contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking different processes (by opening an interactive session and issuing help(os)). The relevant functions are fork and any of the exec ones. To give you an idea on how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## note: os.execv passes args[0] as the new program's argv[0]
    os.execv(args[0], args)
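Putting that fragment into a complete function, a minimal sketch (the spawn name and the echo command are just illustrative):
import os
import sys

def spawn(args):
    # fork: the child replaces itself with the program,
    # the parent gets the child's pid back
    try:
        pid = os.fork()
    except OSError as e:
        print("fork failed:", e, file=sys.stderr)
        sys.exit(1)
    if pid == 0:
        os.execv(args[0], args)
    return pid

pid = spawn(['/bin/echo', 'hello'])
os.waitpid(pid, 0)  # wait for it, or omit this to leave it running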
You can use
import os

pid = os.fork()
if pid == 0:
    # child process: continue to other code ...
    ...
The child branch of the fork then runs in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory the script should not show a window and should work like a background process.
I'm working on a small script. The script should open 3 terminals and interact with these terminals independently.
I understand that subprocess is the best way to do that. What I've done so far:
#!/usr/bin/env python
import subprocess
term1 = subprocess.Popen(["open", "-a", "Terminal"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
term1.communicate(input="pwd")
My problem is I cannot interact with the new terminal: the term1.communicate(input="pwd") part is not working, and I cannot send a command to the new Terminal. I also tried term1.communicate(input="pwd\n") but nothing happens.
Do you any ideas how can I do that?
P.S. I am using Mac OS.
You can run both commands concurrently without opening terminals.
import subprocess
process1 = subprocess.Popen(["ls", "-l"])
process2 = subprocess.Popen(["ls", "-l"])
If you run that code you will see that the directory is listed twice, interleaved together. You can expand this for your specific needs:
tcprelay1 = subprocess.Popen(["tcprelay", "telnet"])
tcprelay2 = subprocess.Popen(["tcprelay", "--portoffset", arg1, arg2])
I'm trying to port some Python code like the following to Ruby:
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # figure out what to launch
    cmd = get_command_based_on_user_input()
    # now replace the forked process with the command
    # (assuming cmd is an argv list; os.exec* is the relevant family)
    os.execvp(cmd[0], cmd)
else:
    # read and write to fd like a terminal
    ...
Since I need to read and write to the subprocess like a terminal, I understand that I should use Ruby's PTY module in lieu of Kernel.fork. But it does not seem to have an equivalent fork method; I must pass a command as a string. This is the closest I can get to Python's functionality:
require 'pty'
# The Ruby executable, ready to execute some codes
RUBY = %Q|/proc/#{Process.pid}/exe -e "%s"|
# A small Ruby program which will eventually replace itself with another program. Very meta.
cmd = "cmd=get_command_based_on_user_input(); exec(cmd)"
r, w, pid = PTY.spawn(RUBY % cmd)
# Read and write from r and w
Obviously some of that is Linux-specific, and that's fine. And obviously some is pseudo-code, but it's the only approach I can find, and I'm only 80% sure that it will work anyway. Surely Ruby has something cleaner?
The important thing is that "get_command_based_on_user_input()" not block the parent process, which is why I stuck it in the child process.
You're probably looking for http://ruby-doc.org/stdlib-1.9.2/libdoc/pty/rdoc/PTY.html, http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-fork and Create a daemon with double-fork in Ruby.
I'd open a PTY in the master process, fork, and reattach the child to that PTY with STDIN.reopen.
I would like to know which testing tools for python support the testing of interactive programs. For example, I have an application launched by:
$ python dummy_program.py
>> Hi whats your name? Joseph
I would like to supply the input Joseph programmatically so I can emulate that interactive behaviour.
If you are testing an interactive program, consider using expect. It's designed specifically for interacting with console programs (though more for automating tasks than for testing).
If you don't like the language expect is based on (tcl) you can try pexpect which also makes it easy to interact with a console program.
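For instance, a minimal pexpect sketch against the dummy_program.py session shown above (the prompt string is an assumption and must match what the program actually prints):
import pexpect

child = pexpect.spawn("python dummy_program.py")
child.expect_exact("Hi whats your name?")  # wait for the prompt
child.sendline("Joseph")                   # emulate the user typing
child.expect(pexpect.EOF)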
Your best bet is probably dependency injection, so that what you'd ordinarily pick up from sys.stdin (for example) is actually an object passed in. So you might do something like this:
import sys

def myapp(stdin, stdout):
    print("Hi, what's your name?", file=stdout)
    name = stdin.readline()
    print("Hi,", name, file=stdout)

# This might be in a separate test module
def test_myapp():
    mock_stdin = ...   # create mock object that has a .readline() method
    mock_stdout = ...  # create mock object that has a .write() method
    myapp(mock_stdin, mock_stdout)

if __name__ == '__main__':
    myapp(sys.stdin, sys.stdout)
Fortunately, Python makes this pretty easy. Here's a more detailed link for an example of mocking stdin: http://konryd.blogspot.com/2010/05/mockity-mock-mock-some-love-for-mock.html
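For instance, a minimal sketch that fills in the placeholders above with io.StringIO objects (an assumption; any objects with the right methods would do):
import io

def test_myapp():
    mock_stdin = io.StringIO("Joseph\n")  # has a .readline() method
    mock_stdout = io.StringIO()           # has a .write() method
    myapp(mock_stdin, mock_stdout)
    assert "Joseph" in mock_stdout.getvalue()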
A good example might be the file test_embed.py of the IPython package.
Two different approaches are used there:
subprocess
import subprocess
# ...
p = subprocess.Popen(cmd, env=env, stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate(_exit_cmd_string)
pexpect (as already mentioned by Brian Oakley):
import pexpect
# ...
child = pexpect.spawn(sys.executable, ['-m', 'IPython', '--colors=nocolor'],
                      env=env)
# ...
child.sendline("some_command")
child.expect(ipy_prompt)