This program should echo the pid of sleep immediately:
import subprocess
subprocess.check_output("sleep 1 & echo $!", shell=True)
Run directly in the shell, this immediately prints the PID; but run from Python, the & seems to be ignored and it takes one second before echo executes.
How can I get this to work with only one call to check_output (or another subprocess function)?
(This is a simplified example; in reality I'd run my own executable instead of sleep 1.)
check_output waits for the output pipe to close, and the backgrounded sleep holds it open too. Redirect the background command's output to /dev/null for an immediate return.
subprocess.check_output("sleep 1 >/dev/null 2>&1 & echo $!", shell=True)
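If you'd rather not rely on shell backgrounding at all, the same effect can be sketched in pure Python (with a plain `sleep` standing in for your real executable): `Popen` returns immediately and exposes the child's PID, and `DEVNULL` means no pipe stays open for anything to wait on.

```python
import subprocess

# Start the child without blocking; no pipes are inherited, so nothing
# forces the parent to wait for the child's stdout/stderr to close.
proc = subprocess.Popen(
    ["sleep", "1"],                 # stand-in for your own executable
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print("pid", proc.pid)              # available immediately
```

The difference from the shell version is that you get a real `Popen` object back, so you can later `proc.poll()` or `proc.wait()` instead of tracking a raw PID.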
UPDATE
It's hard to tell whether sleep 1 really did run in the background, so I wrote a slightly larger test.
test.py - writes time to stdout for 5 seconds
import time

for i in range(5):
    print(time.strftime('%H:%M:%S'), flush=True)
    time.sleep(1)
print('done', flush=True)
runner.py - runs the test redirecting stdout to a file and monitors the file.
import subprocess as subp
import time
import os

# run program in background
pid = int(subp.check_output("python3 test.py >test.out 2>&1 & echo $!",
                            shell=True))
print("pid", pid)

# monitor output file
pos = 0
done = False
while not done:
    time.sleep(.1)
    if os.stat('test.out').st_size > pos:
        with open('test.out', 'rb') as fp:
            fp.seek(pos)
            for line in fp.readlines():
                print(line.strip().decode())
                done = b'done' in line
            pos = fp.tell()
print("test complete")
Running it, I get
td@mintyfresh ~/tmp $ python3 runner.py
pid 24353
09:32:18
09:32:19
09:32:20
09:32:21
09:32:22
done
test complete
Related
There are quite a few posts about collecting live output from a process launched with Python's subprocess module. When I try these solutions with my two test scripts, one a (ba)sh script and the other a Python script, the Python script's output cannot be read live. Instead, when the Python script is run by subprocess, it waits until the process has completed before flushing anything to PIPE. My constraint is that I need a way to retrieve live output from subprocess for the Python script.
Tested on Ubuntu 20.04 and Windows; the shell script was run on Ubuntu 20.04.
Calling code:
import shlex
import subprocess

# invoke process
process = subprocess.Popen('python test.py', shell=True, stdout=subprocess.PIPE)  # shell True/False both result in "not live" output

# Poll process.stdout to show stdout live
while True:
    output = process.stdout.readline()  # <-- hangs here on calling test.py, doesn't hang on test.sh
    if process.poll() is not None:
        break
    if output:
        print(output.strip())
rc = process.poll()
test.py <-- Waits until it has completed to print out entire output
import time

for x in range(10):
    print(x)
    time.sleep(1)
test.sh <-- Prints out live in Python script
#!/bin/bash
for i in $(seq 1 5); do
    echo "iteration" $i
    sleep 1
done
@stochastic13 provided a very useful link showing that the -u switch or setting the PYTHONUNBUFFERED environment variable would work. For my needs, PYTHONUNBUFFERED solved my issue entirely. My test script actually executes another Python script, whose output I also needed. Although -u helps for the first script, it wouldn't help for the second, as I don't have direct access to that script to add the argument. Instead I went with the environment variable; solution below:
def run_command(command):
    os.environ['PYTHONUNBUFFERED'] = '1'
    process = Popen(command, shell=False, stdout=PIPE, env=os.environ)  # shell doesn't quite matter for this issue
    while True:
        output = process.stdout.readline()
        if process.poll() is not None:
            break
        if output:
            print(output)
    rc = process.poll()
    return rc
The code above sets PYTHONUNBUFFERED in the environment; any process spawned by subprocess with that environment inherits PYTHONUNBUFFERED.
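For comparison, the -u route mentioned above can be sketched like this (with a hypothetical inline child script standing in for test.py): passing -u to the interpreter disables buffering for that one process, whereas the env-variable approach also covers any grandchildren it spawns.

```python
import subprocess
import sys

# Hypothetical child: prints a line, sleeps, prints another. With -u its
# output is flushed per line instead of sitting in a block buffer.
child_code = "import time\nprint('one')\ntime.sleep(0.2)\nprint('two')"

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdout=subprocess.PIPE,
)
# Each readline() returns as soon as the child emits that line.
lines = [proc.stdout.readline().strip() for _ in range(2)]
proc.wait()
print(lines)
```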
Test Script
import subprocess

# process = Popen('python test2.py', shell=False)
# while True:
#     if process.poll() is not None:
#         break
# rc = process.poll()

subprocess.call('python test2.py')
Test Script 2
import time
import os

print(list(os.environ.keys()))
for x in range(10):
    print('test2', x)
    time.sleep(1)
The output is a live capture of stdout from any Python process, not just after completion.
...
b'test2 0\r\n'
b'test2 1\r\n'
b'test2 2\r\n'
b'test2 3\r\n'
...
I'm writing a script that produces output of unpredictable size, and I want to know, from inside the loop, when the script has finished.
This is the code:
#!/usr/bin/env python3
import subprocess
import shlex


def main():
    cmd = 'bash -c "for i in $(seq 1 15);do echo $i ;sleep 1;done"'
    print(cmd)
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE,
                         universal_newlines=True)
    for line in p.stdout:
        print(f"file_name: {line.strip()}")
        print(p.poll())


if __name__ == "__main__":
    main()
The p.poll() is always None, even in the last iteration, and that makes sense: after the last echo the script sleeps for one second before finishing.
Any way of making it work?
You have already identified the problem: after the subprocess writes its last line it continues to run for one more second, so while your program is in the loop the subprocess is always seen as running. Even if you move the poll call outside the loop, you may have to wait a bit to give the subprocess a chance to terminate after its final message (I have reduced the loop size; life is too short):
#!/usr/bin/env python3
import subprocess
import shlex
import time


def main():
    cmd = 'bash -c "for i in $(seq 1 5);do echo $i; sleep 1; done;"'
    print(cmd)
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, universal_newlines=True)
    for line in p.stdout:
        print(f"file_name: {line.strip()}", flush=True)
    print(p.poll())
    time.sleep(.1)
    print(p.poll())


if __name__ == "__main__":
    main()
Prints:
bash -c "for i in $(seq 1 5);do echo $i; sleep 1; done;"
file_name: 1
file_name: 2
file_name: 3
file_name: 4
file_name: 5
None
0
To "get it to work" inside the loop would require special knowledge of what's going on inside the subprocess. Based on the previous piece of code, we would need:
#!/usr/bin/env python3
import subprocess
import shlex
import time


def main():
    cmd = 'bash -c "for i in $(seq 1 5);do echo $i; sleep 1; done;"'
    print(cmd)
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, universal_newlines=True)
    for line in p.stdout:
        print(f"file_name: {line.strip()}", flush=True)
        # has to be greater than the sleep time in the subprocess to give
        # the subprocess a chance to terminate
        time.sleep(1.1)
        print(p.poll())


if __name__ == "__main__":
    main()
Prints:
bash -c "for i in $(seq 1 5);do echo $i; sleep 1; done;"
file_name: 1
None
file_name: 2
None
file_name: 3
None
file_name: 4
None
file_name: 5
0
But this is hardly a practical solution. Ask yourself what the polling is for: it offers no useful information unless you are willing to add sleep calls after your reads, because there will always be some delay between the subprocess's last write and its termination, and those sleeps are wasteful. You should simply read until there is no more output and then call p.wait() to wait for the subprocess to terminate. But it's your choice:
#!/usr/bin/env python3
import subprocess
import shlex


def main():
    cmd = 'bash -c "for i in $(seq 1 5);do echo $i; sleep 1; done;"'
    print(cmd)
    p = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE, universal_newlines=True)
    for line in p.stdout:
        print(f"file_name: {line.strip()}", flush=True)
    p.wait()


if __name__ == "__main__":
    main()
I have a script "run.py" that must print "Hello", launch another script "run2.py", and then terminate (do not wait for run2.py to return).
run2.py is not in the local directory and is only required to print "Hello again".
How can I do this?
# run_path = "C:/Program Files (x86)/xxx/run.py"
# run2_path = "//network_share/folder/run2.py"
**run.py**
import os
print("Hello")
# What do I do here?
# os.execl("//network_share/folder/run2.py")
exit()
**run2.py**
print("Hello again")
This seems to work for a script I have in the same folder I'm running this one in.
This should verify that the first script finishes and doesn't linger while the second script runs in its own process. It is possible on some systems, due to their configuration, the child process will terminate when the parent does. But not in this case...
I put more time into this post to add code showing how to check whether the parent process is still running; this is a good way for the child to detect that the parent has exited. It also shows how to pass parameters to the child process.
# launch.py
import subprocess as sp
import os

if __name__ == '__main__':
    sp.Popen(['ps'])  # print out running processes
    print("launch.py's process id is %s." % os.getpid())
    # Give the child process this one's process ID in the parameters.
    sp.Popen(['python3', 'runinproc.py', str(os.getpid())])
    # ^^^ This line above answers the main question of how to kick off a
    # child Python script.
    print("exiting launch.py")
Other script.
# runinproc.py
import time
import subprocess as sp
import sys
import os

def is_launcher_running():
    try:
        # This only checks the status of the process. It doesn't
        # kill it, or otherwise affect it.
        os.kill(int(sys.argv[1]), 0)
    except OSError:
        return False
    else:
        return True

if __name__ == '__main__':
    print("runinproc.py was launched by process ID %s" % sys.argv[1])
    for i in range(100):
        if is_launcher_running():
            # Is launch.py still running?
            print("[[ launch.py is still running... ]]")
            sp.Popen(['ps'])  # print out the running processes
        print("going to sleep for 2 seconds...")
        time.sleep(2)
Bash output:
Todds-iMac:pyexperiments todd$ python3 launch.py
launch.py process id is 40975.
exiting launch.py
Todds-iMac:pyexperiments todd$ runinproc.py was launched by process ID 40975
going to sleep for 2 seconds...
PID TTY TIME CMD
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
Note that the first ps call from launch.py executes after launch.py has already exited; that's why launch.py doesn't show up in the printed process list.
subprocess is your friend, but if you need to not wait, check out the P_NOWAIT-replacement example code in https://docs.python.org/3/library/subprocess.html, e.g.:
pid = Popen(["/bin/mycmd", "myarg"]).pid
I don't think .communicate is what you need this time around; isn't it more for waiting?
The cleanest way to do this (since both scripts are written in pure Python) is to import the other script as a module and execute its content, placed within a function:
run.py
import os
import sys
sys.path.append("//network_share/folder/")
import run2
print("Hello")
run2.main()
exit()
run2.py
def main():
    print("Hello again")
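If you'd rather not modify run2.py to wrap its code in a function, the standard library's runpy module can execute a script by path. A minimal sketch, using a temporary stand-in file for run2.py (in the question it lives on a network share); note that, like the import approach, this runs in the same process, so it does wait for run2.py to finish:

```python
import os
import runpy
import tempfile

# Stand-in for run2.py; the real one would be at //network_share/folder/run2.py.
tmp = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
tmp.write("X = 42\nprint('Hello again')\n")
tmp.close()

print("Hello")
# run_path executes the file's top-level code in a fresh namespace and
# returns that namespace; run_name="__main__" mimics script execution.
globs = runpy.run_path(tmp.name, run_name="__main__")
os.unlink(tmp.name)
```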
I am writing a small program in Python to record audio while printing some text at the same time.
But my print doesn't execute until the audio recording has finished.
Can you please help me resolve this issue?
import picamera, subprocess, os, sys

a1 = "arecord -f cd -D plughw:0 -d 10 a.wav"
subprocess.call(a1, shell=True)
print("Audio record is only for 10sec")
You're using subprocess.call, which blocks:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
You can use a Popen object, which doesn't block:
proc = subprocess.Popen(a1.split())
# code will proceed
# use proc.communicate later on
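A sketch of that pattern, with a short-lived hypothetical child standing in for the arecord command: the print runs while the child is still going, and communicate() is only called when you finally need to wait for it and collect its output.

```python
import subprocess
import sys

# Stand-in for the arecord command: a short-lived child process.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('recording done')"],
    stdout=subprocess.PIPE,
)

print("Audio record is only for 10sec")  # runs while the child is still going

out, _ = proc.communicate()  # now block until the child finishes
print(out.strip())
```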
Or you can have the two things run separately using a Thread (which then spawns the process in its own context):
import picamera, subprocess, os, sys
import threading
def my_process():
    a1 = "arecord -f cd -D plughw:0 -d 10 a.wav"
    subprocess.call(a1, shell=True)
thread = threading.Thread(target=my_process)
thread.start()
print("Audio record is only for 10sec")
I have a Python script that executes Linux commands with a timeout, using a while loop and sleep, like below:
fout = tempfile.TemporaryFile()
try:
    p = subprocess.Popen(["/bin/bash", "-c", options.command], bufsize=-1,
                         shell=False, preexec_fn=os.setsid,
                         stdin=subprocess.PIPE, stdout=fout,
                         stderr=subprocess.PIPE)
except:
    sys.exit(UNEXPECTED_ERROR)
if options.timeout:
    print "options.timeout = %s" % options.timeout
    elapsed = 0
    time.sleep(0.1)  # This sleep is for the delay between Popen and poll() functions
    while p.poll() is None:
        time.sleep(1)
        elapsed = elapsed + 1
        print "elapsed = %s" % elapsed
        if elapsed >= options.timeout:
            # TIMEDOUT
            # kill all processes that are in the same child process group,
            # which kills the process tree
            pgid = os.getpgid(p.pid)
            os.killpg(pgid, signal.SIGKILL)
            p.wait()
            fout.close()
            sys.exit(TIMEOUT_ERROR)
            break
else:
    p.wait()
fout.seek(0)  # rewind to the beginning of the file
print fout.read(),
fout.close()
sys.exit(p.returncode)
$ time myScript -c "cat file2" 2>&1 -t 5
options.timeout = 5
elapsed = 1
real 0m11.811s
user 0m0.046s
sys 0m1.153s
My question: in the case above, even though the timeout is 5 seconds, cat continues until it finishes. Am I missing something here? Please help.
It works as expected on Ubuntu:
$ /usr/bin/ssh root@localhost -t 'sync && echo 3 > /proc/sys/vm/drop_caches'
$ /usr/bin/time python2.4 myscript.py 'cat big_file'
timeout
done
0.01user 0.63system 0:05.16elapsed 12%CPU
$ /usr/bin/ssh root@localhost -t 'sync && echo 3 > /proc/sys/vm/drop_caches'
$ /usr/bin/time cat big_file >/dev/null
0.02user 0.82system 0:09.93elapsed 8%CPU
It also works with a shell command:
$ /usr/bin/time python2.4 myscript.py 'while : ; do sleep 1; done'
timeout
done
0.02user 0.00system 0:05.03elapsed 0%CPU
Assumptions:
you can't use time.time() due to the possibility of a system clock change
time.clock() doesn't measure children's times on Linux
we can't emulate time.monotonic() from Python 3.3 in pure Python, because ctypes is not available on Python 2.4
it is acceptable to survive hibernation, e.g., 2 seconds before hibernation + 3 seconds after the computer wakes up counts as a 5-second timeout, whenever the wakeup happens
#!/usr/bin/env python2.4
import os
import signal
import sys
import tempfile
import time
from subprocess import Popen

class TimeoutExpired(Exception):
    pass

def wait(process, timeout, _sleep_time=.1):
    for _ in xrange(int(timeout * 1. / _sleep_time + .5)):
        time.sleep(_sleep_time)  # NOTE: assume it doesn't wake up earlier
        if process.poll() is not None:
            return process.wait()
    raise TimeoutExpired  # NOTE: timeout precision is not very good

f = tempfile.TemporaryFile()
p = Popen(["/bin/bash", "-c", sys.argv[1]], stdout=f, preexec_fn=os.setsid,
          close_fds=True)
try:
    wait(p, timeout=5)
except TimeoutExpired:
    print >>sys.stderr, "timeout"
    os.killpg(os.getpgid(p.pid), signal.SIGKILL)
    p.wait()
else:
    f.seek(0)
    for line in f:
        print line,
f.close()  # delete it
print >>sys.stderr, "done"
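For what it's worth, on Python 3.3+ the standard library handles this pattern directly: Popen.wait() and subprocess.run() accept a timeout= argument and raise subprocess.TimeoutExpired. A sketch, still killing the whole process group since the timeout alone does not kill anything:

```python
import os
import signal
import subprocess

# start_new_session=True plays the role of preexec_fn=os.setsid:
# the child becomes the leader of a new process group we can kill.
p = subprocess.Popen(["sleep", "30"], start_new_session=True)
try:
    p.wait(timeout=1)            # raises TimeoutExpired after 1 second
except subprocess.TimeoutExpired:
    print("timeout")
    os.killpg(os.getpgid(p.pid), signal.SIGKILL)
    p.wait()                     # reap the killed child
print("done")
```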
Besides that, one problem I see in your code: you call Popen() with stdin=subprocess.PIPE and stderr=subprocess.PIPE, but you never handle these pipes. With a command like cat file2 this should be fine, but it can lead to problems.
I can also spot a potential misbehaviour: you might have mixed up indentation (as in the first version of your question). Assume you have the following:
while p.poll() is None:
    time.sleep(1)
    elapsed = elapsed + 1
    print "elapsed = %s" % elapsed
    if elapsed >= options.timeout:
        # TIMEDOUT
        # kill all processes that are in the same child process group,
        # which kills the process tree
        pgid = os.getpgid(p.pid)
        os.killpg(pgid, signal.SIGKILL)
    p.wait()
    fout.close()
    sys.exit(TIMEOUT_ERROR)
    break
You never reach the timeout threshold, yet p.wait() is called anyway because of the bad indentation. Don't mix tabs and spaces; PEP 8 suggests using spaces only, with an indentation depth of 4 columns.
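Two quick ways to catch this kind of mix-up: Python 3 refuses outright to compile indentation that is ambiguous between tabs and spaces (raising the built-in TabError), and the stdlib tabnanny module (python -m tabnanny file.py) flags it too. A small sketch of the first:

```python
# A snippet whose indentation means different things at different tab
# sizes: the second line is indented with a tab, the third with eight
# spaces. Python 3 rejects this as inconsistent.
src = "if True:\n\tx = 1\n        y = 2\n"

try:
    compile(src, "<example>", "exec")
    caught = False
except TabError:
    caught = True

print("TabError raised:", caught)
```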