Python subprocess.Popen: read only what's returned

I'm fairly new to Python and need to understand more about subprocess.Popen. I have a script that executes another Python script. Below is the part where my script tries to execute the other script:
cmd = ['python %s %s %s %s %s' % (runscript, steps, part_number, serial_number, self.operation)]
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
p.wait()
result = p.stdout.readline()
The problem is that in the script being executed I have to print the result in order to read it through result = p.stdout.readline(). Below is the script that gets executed:
def Main():
    if sys.argv[1] == "Initiate":
        doFunc = Functions_obj.Initiate()
        if doFunc != 0:
            print doFunc
        else:
            print "Initiate PASS"
    elif sys.argv[1] == "Check":
        getDrive = Functions_obj.initialize()
        if getDrive == "NoDevice":
            print getDrive
            sys.exit()
        doFunc = Functions_obj.Identify_Drive()
        if doFunc != 0:
            print doFunc
        else:
            print "Check PASS"
My question is: I want to return results from the script that gets executed rather than print them. How do I do this with subprocess.Popen, and how do I use subprocess to get what's returned rather than what's printed?

A separate process can't return data to you the way a function call does. You have to use one of the many IPC mechanisms: pipes, shared memory, message passing, or sockets. Pipes are generally the simplest, and a pipe is exactly what you are already using here. You aren't limited to printing text through it, though: you can send binary data through the pipe, so you could pickle your results in the child and unpickle them in the parent, assuming the data is pickleable.
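As a minimal sketch of that idea (child.py and the result value here are illustrative, not from your code), the child pickles whatever object it wants to hand back, and the parent unpickles it from the pipe:
# child.py -- hypothetical child script: pickle the result object to stdout
import pickle
import sys

result = {"step": "Check", "status": "PASS"}  # whatever your functions produced
sys.stdout.write(pickle.dumps(result))  # on Python 3, write bytes to sys.stdout.buffer instead

# parent -- read everything the child wrote, then rebuild the object
import pickle
import subprocess

p = subprocess.Popen(['python', 'child.py'], stdout=subprocess.PIPE)
out, _ = p.communicate()  # reads all output, avoiding the wait()-with-full-pipe deadlock
result = pickle.loads(out)  # a real Python object, not a printed string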

subprocess.PIPE prevents executable from closing

Why does subprocess.PIPE prevent a called executable from closing?
I use the following script to call an executable file with a number of inputs:
import subprocess, time
CREATE_NO_WINDOW = 0x08000000
my_proc = subprocess.Popen("myApp.exe " + ' '.join([str(input1), str(input2), str(input3)]),
                           startupinfo=subprocess.STARTUPINFO(), stdout=subprocess.PIPE,
                           creationflags=CREATE_NO_WINDOW)
Then I monitor whether the application has finished within a given time (300 seconds) and, if not, I just kill it. I also read the output of the application to know whether it failed in doing the required tasks:
proc_wait_time = 300
start_time = time.time()
sol_status = 'Fail'
while time.time() - start_time < proc_wait_time:
    if my_proc.poll() is None:
        time.sleep(1)
    else:
        try:
            sol_status = my_proc.stdout.read().replace('\r\n \r\n', '')
            break
        except:
            sol_status = 'Fail'
            break
else:
    try:
        my_proc.kill()
    except:
        pass
    sol_status = 'Frozen'
if sol_status in ['Fail', 'Frozen']:
    print ('Failed running my_proc')
As you can see from the code, I need to wait for myApp.exe to finish; however, sometimes myApp.exe freezes. Since the script above is part of a loop, I need to identify such a situation (by a timer), keep track of it, and kill myApp.exe so that the whole script doesn't get stuck.
Now, the issue is that if I use subprocess.PIPE (which I suppose I have to if I want to read the output of the application), then myApp.exe doesn't close after finishing, and consequently my_proc.poll() is None is always True.
I am using Python 2.7.
The OS pipe buffer has a limited capacity, so a process that writes a large amount of data to subprocess.PIPE will block once the buffer fills up if nothing is reading from the other end, and it never gets to exit. The easiest way to avoid this is to pipe the data directly into a file:
_stdoutHandler = open('C:/somePath/stdout.log', 'w')
_stderrHandler = open('C:/somePath/stderr.log', 'w')
my_proc = subprocess.Popen(
    "myApp.exe " + ' '.join([str(input1), str(input2), str(input3)]),
    stdout=_stdoutHandler,
    stderr=_stderrHandler,
    startupinfo=subprocess.STARTUPINFO(),
    creationflags=CREATE_NO_WINDOW
)
...
_stdoutHandler.close()
_stderrHandler.close()
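With the output going to files instead of a pipe, the wait-or-kill loop no longer needs to read anything while the process runs; a minimal sketch of how the monitoring could look under that setup (timeout value and log path as above):
proc_wait_time = 300
start_time = time.time()
sol_status = 'Frozen'
while time.time() - start_time < proc_wait_time:
    if my_proc.poll() is not None:  # the process exited on its own
        _stdoutHandler.close()
        with open('C:/somePath/stdout.log') as f:
            sol_status = f.read().replace('\r\n \r\n', '')
        break
    time.sleep(1)
else:
    my_proc.kill()  # timed out: treat it as frozen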

Python 3.3+: How to suppress exceptions in subprocess.Popen()?

I have a class with some functions that basically do output checks on data; the functions of this class are called in a subprocess. If an output check fails, the subprocess calls sys.exit with a different code depending on which check failed.
In the main code I have this:
try:
    exitCode = 0
    # import module for current test
    teststr = os.path.splitext(test)[0]
    os.chdir(fs.scriptFolder)
    test = __import__(teststr)
    # delete old output folder and create a new one
    if os.path.isdir(fs.outputFolder):
        rmtree(fs.outputFolder)
    os.mkdir(fs.outputFolder)
    # run the test passed into the function as a new subprocess
    os.chdir(fs.pythonFolder)
    myEnv = os.environ.copy()
    myEnv["x"] = "ON"
    testSubprocess = Popen(['python', test.testInfo.network + '.py', teststr], env=myEnv)
    testSubprocess.wait()
    result = addFields(test)
    # poke the data into the postgresql database if the network ran successfully
    if testSubprocess.returncode == 0:
        uploadToPerfDatabase(result)
    elif testSubprocess.returncode == 1:
        raise Exception("Incorrect total number of rows on output, expected: " + str(test.testInfo.outputValidationProps['TotalRowCount']))
    elif testSubprocess.returncode == 2:
        raise Exception("Incorrect number of columns on output, expected: " + str(test.testInfo.outputValidationProps['ColumnCount']))
except Exception as e:
    log.out(teststr + " failed", True)
    log.out(str(e))
    log.out(traceback.format_exc())
    exitCode = 1
return exitCode
Now the output from this shows the full traceback and Python exceptions for the sys.exit calls in the subprocess. I'm logging all errors, so I don't want anything displayed in the command prompt unless I've printed it manually. I'm not quite sure how to go about this.
You can send stderr to os.devnull by passing the special subprocess.DEVNULL value:
p = Popen(['python', '-c', 'print(1/0)'], stderr=subprocess.DEVNULL)
subprocess.DEVNULL
Special value that can be used as the stdin, stdout or stderr argument to Popen and indicates that the special file os.devnull will be used.
New in version 3.3. (From the subprocess docs.)
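Applied to the Popen call from the question, that would be (a sketch; everything else stays as in your code):
testSubprocess = Popen(['python', test.testInfo.network + '.py', teststr],
                       env=myEnv, stderr=subprocess.DEVNULL)  # discard the child's traceback output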

How to check the status of a shell script using the subprocess module in Python?

I have a simple Python script which executes a shell script using the subprocess module. Below is my Python script, which calls the testing.sh shell script, and it works fine:
import os
import json
import subprocess

jsonData = '{"pp": [0,3,5,7,9], "sp": [1,2,4,6,8]}'
jj = json.loads(jsonData)
print jj['pp']
print jj['sp']
os.putenv('jj1', 'Hello World 1')
os.putenv('jj2', 'Hello World 2')
os.putenv('jj3', ' '.join(str(v) for v in jj['pp']))
os.putenv('jj4', ' '.join(str(v) for v in jj['sp']))
print "start"
subprocess.call(['./testing.sh'])
print "end"
And below is my shell script:
#!/bin/bash
for el1 in $jj3
do
    echo "$el1"
done
for el2 in $jj4
do
    echo "$el2"
done
for i in $( david ); do
    echo item: $i
done
Now the question I have is this: in my Python script I print start, then execute the shell script, and then print end. Suppose the shell script I am executing has a problem for whatever reason; then I don't want to print end.
In the above example the shell script will not run properly, since david is not a Linux command, so it will throw an error. How should I check the status of the entire bash shell script and then decide whether to print end or not?
I have just added a for loop as an example; it could be any shell script. Is it possible to do this?
You can check the stderr of the bash script rather than the return code:
proc = subprocess.Popen('./testing.sh', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = proc.communicate()
if stderr:
    print "Shell script gave some error"
else:
    print "end"  # shell script ran fine
Just use the returned value from call():
import subprocess
rc = subprocess.call("true")
assert rc == 0 # zero exit status means success
rc = subprocess.call("false")
assert rc != 0 # non-zero means failure
You could use check_call() to raise an exception automatically if the command fails instead of checking the returned code manually:
rc = subprocess.check_call("true") # <-- no exception
assert rc == 0
try:
    subprocess.check_call("false")  # raises an exception
except subprocess.CalledProcessError as e:
    assert e.returncode == 1
else:
    assert 0, "never happens"
Well, according to the docs, .call will return the exit code to you. You may want to check that you actually get an error return code, though. (I think the for loop will still return a 0 code, since the script as a whole more-or-less finished.)
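Putting that together with the original script, a minimal sketch (this assumes testing.sh itself exits non-zero on failure, e.g. because it starts with set -e or ends with an explicit exit 1):
import subprocess

print "start"
rc = subprocess.call(['./testing.sh'])
if rc == 0:
    print "end"
else:
    print "shell script failed with exit code %d" % rc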

Get real-time output from Popen

# cmd = "python subscript.py"
cmd = "ping localhost -n 10"
ofile =open("C:\file.log","w")
sp = subprocess.Popen(cmd,bufsize = 1, stdout = subprocess.PIPE, stderr = subprocess.PIPE)
while True:
sp.poll()
line = sp.stdout.readline()
#eline = sp.stderr.readline()
if line:
print line
ofile.write(line)
#if eline:
# print eline
# ofile.write(" ERROR: "+line)
# if (line == "" and eline == ""):
if (line == ""):
break
I'm trying to get output from a subprocess and save it to a log file using the code above. It works well for ping localhost -n 10, but when using it to call subscript.py I cannot get the output of subscript.py in real time; I get all the output only after subscript.py terminates. Any suggestions? Also, I have to comment out eline = sp.stderr.readline() in order to get it to work. Any idea why this code won't give me real-time output of subscript.py?
subscript.py:
import time

i = 0
while i < 5:
    time.sleep(1)
    i += 1
    print "output:", i
As rubik mentioned, a couple of similar questions have been asked before. I tried everything I found and none of it solved my problem. I hope someone can point out why it's not working when calling subscript.py.
EDIT:
The problem: the output of subscript.py was not flushed until the script itself terminated. Also, subscript.py never writes anything to stderr, so calling sp.stderr.readline() causes an infinite wait.
The solution: flush the output in subscript.py; for stderr, I just use stderr=subprocess.STDOUT to redirect it to stdout.
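In code, that stderr redirection looks like this (a sketch of the corrected Popen call from above):
sp = subprocess.Popen(cmd, bufsize=1, stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT)  # stderr lines now arrive interleaved on sp.stdout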
Have you tried calling sys.stdout.flush() in your subprocess after the print statement? You could be running into output buffering.
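For example, a minimal sketch of subscript.py with an explicit flush after each print (the loop is the one from the question):
import sys
import time

i = 0
while i < 5:
    time.sleep(1)
    i += 1
    print "output:", i
    sys.stdout.flush()  # push each line through the pipe immediately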

Getting output from and giving commands to a python subprocess

I am trying to get output from a subprocess and then give commands to that process based on the preceding output. I need to do this a variable number of times, when the program needs further input. (I also need to be able to hide the subprocess command prompt if possible).
I figured this would be an easy task, given that I have seen this problem discussed in posts from 2003 and it is nearly 2012; it appears to be a pretty common need and really seems like it should be a basic part of any programming language. Apparently I was wrong, and somehow, almost 9 years later, there is still no standard way of accomplishing this task in a stable, non-destructive, platform-independent way!
I don't really understand much about file I/O, buffering, or threading, so I would prefer a solution that is as simple as possible. If there is a module that accomplishes this and is compatible with Python 3.x, I would be very willing to download it. I realize that there are multiple questions that ask basically the same thing, but I have yet to find an answer that addresses the simple task I am trying to accomplish.
Here is the code I have so far, based on a variety of sources; however, I have absolutely no idea what to do next. All my attempts ended in failure, and some managed to use 100% of my CPU (to do basically nothing) and would not quit.
import subprocess
from subprocess import Popen, PIPE
p = Popen(r'C:\postgis_testing\shellcomm.bat', stdin=PIPE, stdout=PIPE,
          stderr=subprocess.STDOUT, shell=True)
stdout, stderr = p.communicate(b'command string')
In case my question is unclear, I am posting the text of a sample batch file that demonstrates a situation in which it is necessary to send multiple commands to the subprocess (if you type an incorrect command string, the program loops).
#echo off
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
If anyone can help me I would very much appreciate it, and I'm sure many others would as well!
EDIT: here is the functional code, using eryksun's code (next post):
import subprocess
import threading
import time
import sys

try:
    import queue
except ImportError:
    import Queue as queue

def read_stdout(stdout, q, p):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output: %s' % outdata)

if __name__ == '__main__':
    #setup
    p = subprocess.Popen(['shellcomm.bat'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         bufsize=0, shell=True)  # I put shell=True to hide prompt
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    #for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q, p))
    t.daemon = True
    t.start()

    #command loop
    while p.poll() is None:
        printout(q)
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)
        time.sleep(0.1)  # I added this to give some time to check for closure (otherwise it doesn't work)

    #tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('Return Code: %d' % rc)
However, when the script is run from a command prompt, the following happens:
C:\Users\username>python C:\postgis_testing\shellcomm7.py
Input: sth
Traceback (most recent call last):
  File "C:\postgis_testing\shellcomm7.py", line 51, in <module>
    p.stdin.write(cmd)
IOError: [Errno 22] Invalid argument
It seems that the program closes when run from a command prompt. Any ideas?
This demo uses a dedicated thread to read from stdout. If you search around, I'm sure you can find a more complete implementation written up in an object-oriented interface. At least I can say this is working for me with your provided batch file in both Python 2.7.2 and 3.2.2.
shellcomm.bat:
#echo off
echo Command Loop Test
echo.
:looper
set INPUT=
set /P INPUT=Type the correct command string:
if "%INPUT%" == "command string" (echo you are correct) else (goto looper)
Here's what I get for output based on the sequence of commands "wrong", "still wrong", and "command string":
Output:
Command Loop Test
Type the correct command string:
Input: wrong
Output:
Type the correct command string:
Input: still wrong
Output:
Type the correct command string:
Input: command string
Output:
you are correct
Return Code: 0
For reading the piped output, readline might work sometimes, but set /P INPUT in the batch file naturally isn't writing a line ending. So instead I used lambda: stdout.read(1) to read a byte at a time (not so efficient, but it works). The reading function puts the data on a queue. The main thread gets the output from the queue after it writes a command. Using a timeout on the get call makes it wait a small amount of time to ensure the program is waiting for input. Instead, you could check the output for prompts to know when the program is expecting input.
All that said, you can't expect a setup like this to work universally because the console program you're trying to interact with might buffer its output when piped. In Unix systems there are some utility commands available that you can insert into a pipe to modify the buffering to be non-buffered, line-buffered, or a given size -- such as stdbuf. There are also ways to trick the program into thinking it's connected to a pty (see pexpect). However, I don't know a way around this problem on Windows if you don't have access to the program's source code to explicitly set the buffering using setvbuf.
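On those Unix systems, the stdbuf approach might look like this (a sketch; ./myprog is a placeholder for the console program):
# force the child into line-buffered stdout (Unix only)
p = subprocess.Popen(['stdbuf', '-oL', './myprog'], stdout=subprocess.PIPE)
And here is the full demo script: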
import subprocess
import threading
import time
import sys

if sys.version_info.major >= 3:
    import queue
else:
    import Queue as queue
    input = raw_input

def read_stdout(stdout, q):
    it = iter(lambda: stdout.read(1), b'')
    for c in it:
        q.put(c)
        if stdout.closed:
            break

_encoding = getattr(sys.stdout, 'encoding', 'latin-1')

def get_stdout(q, encoding=_encoding):
    out = []
    while 1:
        try:
            out.append(q.get(timeout=0.2))
        except queue.Empty:
            break
    return b''.join(out).rstrip().decode(encoding)

def printout(q):
    outdata = get_stdout(q)
    if outdata:
        print('Output:\n%s' % outdata)

if __name__ == '__main__':
    ARGS = ["shellcomm.bat"]  ### Modify this

    #setup
    p = subprocess.Popen(ARGS, bufsize=0, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    q = queue.Queue()
    encoding = getattr(sys.stdin, 'encoding', 'utf-8')

    #for reading stdout
    t = threading.Thread(target=read_stdout, args=(p.stdout, q))
    t.daemon = True
    t.start()

    #command loop
    while 1:
        printout(q)
        if p.poll() is not None or p.stdin.closed:
            break
        cmd = input('Input: ')
        cmd = (cmd + '\n').encode(encoding)
        p.stdin.write(cmd)

    #tear down
    for n in range(4):
        rc = p.poll()
        if rc is not None:
            break
        time.sleep(0.25)
    else:
        p.terminate()
        rc = p.poll()
        if rc is None:
            rc = 1

    printout(q)
    print('\nReturn Code: %d' % rc)
