I'd like to start a process, wait 2 seconds, print out whatever is in the stderr and stdout pipes so far, and then exit. Here is the code I have so far, and it doesn't work as hoped. What am I doing wrong?
There are 3 issues:
1. The program as it stands prints out "done" and then waits for the subprocess to complete before printing out the first line.
2. As it stands, the script reads one line. How do I read to the end of the current buffer?
3. Will the subprocess exit if the calling script exits? If so, how should I modify the function call so that the subprocess runs to completion even if the calling script exits?
import subprocess
import time

cmdStr = "./stepper.py"
proc = subprocess.Popen(cmdStr, shell=True, bufsize=-1, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
print "polling"
time.sleep(2)
print "done"
print proc.stdout.readline()
Here is what stepper.py looks like:
import sys
import time

out = open("stepper.log", 'w')
for idx in range(3):
    time.sleep(2)
    print "Idx", idx
    sys.stdout.flush()
    out.write("%d\n" % (idx))
print "finished"
out.write("closing\n")
out.close()
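For reference, one way to get the behavior asked for in points 1 and 2 is to drain the pipe from a helper thread and print whatever has accumulated after two seconds. This is only a minimal Python 3 sketch (text=True assumes Python 3.7+); start_new_session addresses point 3 on POSIX systems by detaching the child from the terminal session so it can run to completion after the parent exits:
import queue
import subprocess
import threading
import time

def drain(pipe, q):
    # Push lines onto the queue as the child produces them, until EOF.
    for line in iter(pipe.readline, ''):
        q.put(line)

proc = subprocess.Popen(
    "./stepper.py", shell=True,
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
    start_new_session=True,  # POSIX: detach so a terminal hangup doesn't kill the child
)
q = queue.Queue()
threading.Thread(target=drain, args=(proc.stdout, q), daemon=True).start()

time.sleep(2)
while not q.empty():  # print only what has arrived so far, then exit
    print(q.get_nowait(), end="")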
Related
I have a script where I launch with popen a shell command.
The problem is that the script doesn't wait until the popen command is finished and continues right away.
om_points = os.popen(command, "w")
.....
How can I tell to my Python script to wait until the shell command has finished?
Depending on how you want your script to work, you have two options. If you want the command to block and not do anything while it is executing, you can just use subprocess.call.
# start and block until done (redirect via the stdout argument:
# a ">" inside the argument list is not a shell redirect)
with open(diz['d'] + "/points.xml", "w") as out:
    subprocess.call([data["om_points"]], stdout=out)
If you want to do things while it is executing or feed things into stdin, you can use communicate after the popen call.
# start and process things, then wait (again redirecting via stdout)
with open(diz['d'] + "/points.xml", "w") as out:
    p = subprocess.Popen([data["om_points"]], stdout=out)
    print "Happens while running"
    p.communicate()  # now wait; communicate() can also send data to stdin
As stated in the documentation, wait() can deadlock when using stdout=PIPE or stderr=PIPE if the child fills the OS pipe buffer, so communicate() is advisable.
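A minimal sketch of the difference, with a trivial child command standing in for the real one:
import subprocess
import sys

# Deadlock-prone: if the child writes more than the OS pipe buffer holds,
# it blocks on write while the parent blocks in wait() without reading.
p = subprocess.Popen([sys.executable, "-c", "print('hi')"], stdout=subprocess.PIPE)
p.wait()
print(p.stdout.read())

# Safe: communicate() keeps reading the pipe while it waits.
p = subprocess.Popen([sys.executable, "-c", "print('hi')"], stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)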
You can use subprocess to achieve this.
import subprocess

# This command could have multiple commands separated by a newline \n
some_command = "export PATH=$PATH://server.sample.mo/app/bin \n customupload abc.txt"

p = subprocess.Popen(some_command, stdout=subprocess.PIPE, shell=True)

# communicate() reads all output and waits for the command to finish
(output, err) = p.communicate()

# communicate() has already waited, so wait() here just returns the exit status
p_status = p.wait()

# This will give you the output of the command being executed
print "Command output: " + output
Force popen not to continue until all output is read (reading to EOF blocks until the command finishes):
os.popen(command).read()
Let the command you are trying to run be
os.system('x')
then you convert it to a statement
t = os.system('x')
Now Python will wait for the command to finish so that its exit status can be assigned to the variable t (note that os.system returns the exit status, not the command's output).
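For example, with a real command in place of 'x':
import os

t = os.system("echo hello")   # blocks until the command exits
print("exit status:", t)      # the exit status, not the command's output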
wait() works fine for me. The subprocesses p1, p2 and p3 are started at the same time. Therefore, all of them are done after about 3 seconds.
import subprocess

processes = []
p1 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
p2 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
p3 = subprocess.Popen("sleep 3", stdout=subprocess.PIPE, shell=True)
processes.append(p1)
processes.append(p2)
processes.append(p3)

for p in processes:
    if p.wait() != 0:
        print("There was an error")

print("all processes finished")
What you are looking for is the wait method.
I think process.communicate() is suitable when the output is small, since it buffers everything in memory. For larger output it would not be the best approach.
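A minimal Python 3 sketch of streaming large output line by line instead, with a trivial child command standing in for the real one:
import subprocess
import sys

# Read incrementally as the child produces output, instead of buffering it all.
p = subprocess.Popen([sys.executable, "-c", "print('line1'); print('line2')"],
                     stdout=subprocess.PIPE, text=True)
for line in p.stdout:
    print(line, end="")
p.stdout.close()
p.wait()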
I am trying to execute a batch script through Python. subprocess.Popen runs the command, but then it gets stuck in the terminal and doesn't print any output, even though the copy completes. Can you help with this?
ps_copy_command = "call copyfiles.cmd"
process = subprocess.Popen(["cmd", "/C", ps_copy_command, password], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.wait()
print("\tCopy completed ")
output = process.stdout.read()
print(output)
If the code is stuck at output = process.stdout.read(), something similar happened to me some time ago. The problem is that process.stdout.read() won't return until stdout has something in it. For example:
Imagine your batch script does some task that takes 10 seconds and then prints 'Done!'
Python is going to wait until 'Done!' is returned by the script.
The way I solved this problem was by adding threads. One thread reads stdout and the other waits n seconds; then I join the waiting thread, and if the stdout thread has produced something, I print it.
import time
import threading

def wait_thread():
    seconds = 0
    while seconds < 2:
        time.sleep(1)
        seconds += 1
    return True

def stdout_thread():
    # blocks until the process closes its stdout (EOF)
    global output
    output = process.stdout.read()

output = None
t1 = threading.Thread(target=wait_thread)
t2 = threading.Thread(target=stdout_thread)
t2.daemon = True  # don't keep the interpreter alive if stdout never closes
t1.start()
t2.start()
t1.join()  # wait until the waiting thread ends
if output:
    print(output)
else:
    print("No output")
I am using subprocess.run to address this; it waits for the command to complete before returning.
process = subprocess.run(["cmd", "/C", ps_copy_command, password], stdout=subprocess.PIPE)
print(process.returncode)
print(process.stdout)
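A variant of the same call, assuming Python 3.7+ for capture_output and text, with an arbitrary 300-second timeout so a hung copy raises TimeoutExpired instead of blocking forever:
import subprocess

result = subprocess.run(
    ["cmd", "/C", "copyfiles.cmd"],  # Windows command from the question above
    capture_output=True,             # collect both stdout and stderr
    text=True,                       # decode bytes to str
    timeout=300,                     # arbitrary bound on the wait
)
print(result.returncode)
print(result.stdout)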
I am working on a Python program which drives the cmd window.
I am using subprocess with PIPE.
If, for example, I write "dir" (via stdin), I use communicate() in order to get the response from cmd, and it does work.
The problem is that in a while True loop, this doesn't work more than once; it seems like the subprocess closes itself.
Help me please.
import subprocess

process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
x = ""
while x != "x":
    x = raw_input("insert a command \n")
    process.stdin.write(x + "\n")
    o, e = process.communicate()
    print o
process.stdin.close()
The main problem is that trying to read subprocess.PIPE deadlocks when the program is still running but there is nothing to read from stdout. communicate() avoids this by closing stdin and reading until the process exits, which is why it can only be called once.
A solution is to put the code that reads stdout in another thread, and then access it via a Queue, which allows reliable sharing of data between threads by timing out instead of deadlocking.
The new thread reads standard out continuously, stopping when there is no more data.
Each line is taken from the queue until a timeout is reached (no more data in the Queue); then the list of lines is printed to the screen.
This approach works for non-interactive programs.
import subprocess
import threading
import Queue

def read_stdout(stdout, queue):
    while True:
        queue.put(stdout.readline())  # this hangs when there is no IO

process = subprocess.Popen('cmd.exe', shell=False, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
q = Queue.Queue()
t = threading.Thread(target=read_stdout, args=(process.stdout, q))
t.daemon = True  # t stops when the main thread stops
t.start()

while True:
    x = raw_input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    o = []
    try:
        while True:
            o.append(q.get(timeout=.1))
    except Queue.Empty:
        print ''.join(o)
I need to start a Python script from Python and keep it running.
For argument purposes, say that there is a program called slave.py
if __name__ == '__main__':
    done = False
    while not done:
        line = raw_input()
        print line
        if line.lower() == 'quit' or line.lower() == 'q':
            done = True
            break
        stringLen = len(line)
        print "len: %d " % stringLen
The program "slave.py" receives a string, calculates the input length of the string
and outputs the length to stdout with a print statement.
It should run until I give it a "quit" or "q" as an input.
Meanwhile, in another program called "master.py", I will invoke "slave.py"
# Master.py
import subprocess

if __name__ == '__main__':
    # Start a subprocess of "slave.py"
    slave = subprocess.Popen('python slave.py', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    x = "Hello world!"
    (stdout, stderr) = slave.communicate(x)
    # This works - returns 12
    print "stdout: ", stdout
    x = "name is"
    # The code bombs here with a 'ValueError: I/O operation on closed file'
    (stdout, stderr) = slave.communicate(x)
    print "stdout: ", stdout
However, the slave.py program that I opened using Popen() only takes one communicate() call; it ends after that call.
For this example, I would like to have slave.py keep running, as a server in a client-server model, until it receives a "quit" or "q" string via communicate. How would I do that with the subprocess.Popen() call?
If each input line produces a known number of output lines, then you could:
import sys
from subprocess import Popen, PIPE

p = Popen([sys.executable, '-u', 'slave.py'], stdin=PIPE, stdout=PIPE)

def send(input):
    print >>p.stdin, input
    print p.stdout.readline(),  # print the echoed input
    response = p.stdout.readline()
    if response:
        print response,  # or just return it
    else:  # EOF
        p.stdout.close()

send("hello world")
# ...
send("name is")
send("q")
p.stdin.close()  # nothing more to send
print 'waiting'
p.wait()
print 'done'
Otherwise you might need threads to read the output asynchronously.
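A minimal Python 3 sketch of that asynchronous approach, assuming text-mode pipes (the built-in print serves as a stand-in consumer):
import subprocess
import sys
import threading

def reader(pipe, callback):
    # Forward each line from the child to the callback as it arrives, until EOF.
    for line in iter(pipe.readline, ''):
        callback(line)

p = subprocess.Popen([sys.executable, '-u', 'slave.py'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
threading.Thread(target=reader, args=(p.stdout, print), daemon=True).start()
# The main thread can keep writing to p.stdin while lines print as they arrive.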
If you intend to keep slave.py alive over the parent's life-cycle, you can daemonize it:
http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/
Alternatively, you could look at the multiprocessing API:
http://docs.python.org/library/multiprocessing.html
... which allows thread-like processing over different child processes.
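A minimal sketch of the multiprocessing alternative, with a hypothetical work function standing in for slave.py's job:
from multiprocessing import Process

def work(text):
    # Hypothetical stand-in for slave.py: compute and print the length.
    print(len(text))

if __name__ == '__main__':
    p = Process(target=work, args=("Hello world!",))
    p.start()
    p.join()  # wait for the child process to finish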