I am trying to handle the exit status with popen, but it gives an error. The code is:
import os
try:
    res = os.popen("ping -c 4 www.google.com")
except IOError:
    print "ISPerror: popen"
try:
    #wait = [0,0]
    wait = os.wait()
except IOError:
    print "ISPerror:os.wait"
if wait[1] != 0:
    print(" os.wait:exit status != 0\n")
else:
    print("os.wait:" + str(wait))
print("before read")
result = res.read()
print("after read:")
print("exiting")
But it is giving the following error:
close failed in file object destructor:
IOError: [Errno 10] No child processes
Error Explanation
This error occurs because, upon exiting, the program tries to destroy res, which involves calling its close() method. For a file object returned by os.popen(), close() waits on the child process to collect its exit status -- but the earlier call to os.wait() has already reaped that child. The second wait therefore fails with ECHILD ("No child processes"), which surfaces as the IOError above. If the call to os.wait() is removed, the error no longer occurs.
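A minimal sketch that should reproduce the mechanism, assuming Python 2 (which the print statements indicate):
import os

res = os.popen("sleep 1")
print os.wait()    # reaps the child: returns (pid, encoded exit status)
try:
    res.close()    # close() tries to wait on the child again -- but it is already reaped
except IOError as e:
    print "close failed:", e    # IOError: [Errno 10] No child processes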
import os
try:
    res = os.popen("ping -c 4 www.google.com")
except IOError:
    print "ISPerror: popen"
print("before read")
result = res.read()
res.close()  # explicitly close the file object
print("after read: {}".format(result))
print("exiting")
But this leaves you with the problem of knowing when the process has finished. And since res is just a plain file object, your options are limited. I would instead move to subprocess.Popen.
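One stopgap if you do stay with os.popen: in Python 2, the close() call itself returns the child's exit status, encoded as for os.wait(), or None if the status was 0. A minimal sketch:
import os

res = os.popen("ping -c 4 www.google.com")
result = res.read()     # read() hitting EOF means the command has finished writing
status = res.close()    # None if the exit status was 0, else the encoded status
if status is not None:
    print "exit status:", status >> 8    # high byte holds the exit code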
Using subprocess.Popen
To use subprocess.Popen, you pass your command in as a list of strings. To access the output of the process, set the stdout argument to subprocess.PIPE, which lets you read stdout later on using file operations. Then, instead of the module-level os.wait(), subprocess.Popen objects have their own wait() method that you call directly on the object; this also sets the returncode attribute, which holds the exit status.
import subprocess

# list of strings representing the command
args = ['ping', '-c', '4', 'www.google.com']
try:
    # stdout=subprocess.PIPE lets you capture the output
    res = subprocess.Popen(args, stdout=subprocess.PIPE)
except OSError:
    print "error: popen"
    exit(-1)  # if the subprocess call failed, there's not much point in continuing
res.wait()  # wait for the process to finish; this also sets res.returncode
if res.returncode != 0:
    print("wait: exit status != 0\n")
else:
    print("wait: ({}, {})".format(res.pid, res.returncode))
# access the output from stdout
result = res.stdout.read()
print("after read: {}".format(result))
print("exiting")
Related
Hi, I'm trying to make a video converter for Django with Python. I forked the django-ffmpeg module, which does almost everything I want, except that it doesn't catch the error if conversion fails.
Basically, the module passes the ffmpeg command for the conversion to the command-line interface, like this:
/usr/bin/ffmpeg -hide_banner -nostats -i %(input_file)s -target
film-dvd %(output_file)
The module uses this method to pass the ffmpeg command to the CLI and get the output:
def _cli(self, cmd, without_output=False):
    print 'cli'
    if os.name == 'posix':
        import commands
        return commands.getoutput(cmd)
    else:
        import subprocess
        if without_output:
            DEVNULL = open(os.devnull, 'wb')
            subprocess.Popen(cmd, stdout=DEVNULL, stderr=DEVNULL)
        else:
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
            return p.stdout.read()
But if, for example, you upload a corrupted video file, it only returns the ffmpeg message printed on the CLI; nothing is triggered to signal that something failed.
This is a sample of ffmpeg's output when conversion fails:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] Format mov,mp4,m4a,3gp,3g2,mj2
detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x237d500] moov atom not found
/home/user/PycharmProjects/videotest/media/videos/orig/270f412927f3405aba041265725cdf6b.mp4:
Invalid data found when processing input
I was wondering if there's any way to turn that into an exception, and how, so I can handle it easily.
The only option that came to my mind is to search for "Invalid data found when processing input" in the CLI output string, but I'm not sure this is the best approach. Can anyone help and guide me with this, please?
You need to check the returncode of the Popen object that you're creating.
Check the docs: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
Your code should wait for the subprocess to finish (with wait) and then check the returncode. If the returncode is != 0, you can raise any exception you want.
This is how I implemented it in case it's useful to someone else:
def _cli(self, cmd):
    errors = False
    import subprocess
    try:
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE, shell=True)
        stdoutdata, stderrdata = p.communicate()  # waits for the process to finish
        if p.returncode != 0:
            # Handle error / raise exception
            errors = True
            print "There were some errors"
            return stderrdata, errors
        print 'conversion success'
        return stderrdata, errors
    except OSError as e:
        errors = True
        return e.strerror, errors
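For comparison, here is a sketch of the same idea using subprocess.check_call, which waits for the command and raises CalledProcessError on a non-zero exit status, so the failure becomes an exception automatically (the RuntimeError wrapper is just illustrative):
import subprocess

def convert(cmd):
    try:
        subprocess.check_call(cmd, shell=True)    # raises on non-zero exit status
    except subprocess.CalledProcessError as e:
        raise RuntimeError("ffmpeg failed with exit status %d" % e.returncode)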
I have a class with some functions that do output checks on data; the functions of this class are called via a subprocess.
If an output check fails, the subprocess calls sys.exit with a different code depending on which check failed.
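For context, the child side presumably looks something like this hypothetical sketch (the names and checks are illustrative, not from the actual test code):
import sys

def run_output_checks(rows, cols, expected_rows, expected_cols):
    # hypothetical: one distinct exit code per failed check
    if rows != expected_rows:
        sys.exit(1)    # wrong total row count
    if cols != expected_cols:
        sys.exit(2)    # wrong column count
    sys.exit(0)        # all checks passed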
In the main code I have this:
try:
    exitCode = 0
    # import module for current test
    teststr = os.path.splitext(test)[0]
    os.chdir(fs.scriptFolder)
    test = __import__(teststr)
    # delete old output folder and create a new one
    if os.path.isdir(fs.outputFolder):
        rmtree(fs.outputFolder)
    os.mkdir(fs.outputFolder)
    # run the test passed into the function as a new subprocess
    os.chdir(fs.pythonFolder)
    myEnv = os.environ.copy()
    myEnv["x"] = "ON"
    testSubprocess = Popen(['python', test.testInfo.network + '.py', teststr], env=myEnv)
    testSubprocess.wait()
    result = addFields(test)
    # poke the data into the postgresql database if the network ran successfully
    if testSubprocess.returncode == 0:
        uploadToPerfDatabase(result)
    elif testSubprocess.returncode == 1:
        raise Exception("Incorrect total number of rows on output, expected: "
                        + str(test.testInfo.outputValidationProps['TotalRowCount']))
    elif testSubprocess.returncode == 2:
        raise Exception("Incorrect number of columns on output, expected: "
                        + str(test.testInfo.outputValidationProps['ColumnCount']))
except Exception as e:
    log.out(teststr + " failed", True)
    log.out(str(e))
    log.out(traceback.format_exc())
    exitCode = 1
return exitCode
Now the output from this shows all the traceback and Python exceptions for the sys.exit calls in the subprocess.
I'm logging all errors, so I don't want anything displayed in the command prompt unless I've printed it manually.
I'm not quite sure how to go about this.
You can send stderr to os.devnull with the subprocess.DEVNULL constant:
p = Popen(['python', '-c', 'print(1/0)'], stderr=subprocess.DEVNULL)
From the docs: subprocess.DEVNULL -- "Special value that can be used as the stdin, stdout or stderr argument to Popen and indicates that the special file os.devnull will be used." New in version 3.3.
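Note that subprocess.DEVNULL only exists on Python 3.3+. If this code still runs on Python 2 (the print statements elsewhere in the question suggest it might), a sketch of the equivalent using os.devnull:
import os
from subprocess import Popen

with open(os.devnull, 'wb') as devnull:
    p = Popen(['python', '-c', 'print(1/0)'], stderr=devnull)
    p.wait()    # the ZeroDivisionError traceback is swallowed by the null device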
Is it possible to redirect stdout and stderr into two different text files? I am struggling with the following code and not getting what I want. With the code below I could only see the exit status (0) being printed in Output_Log.txt:
-------SSH Connection opened and command sent to remote server-----
stdout_data = []
stderr_data = []
stdout_data = sys.stdout
stderr_data = sys.stderr
sys.stdout = open('Output_Log.txt','w')
sys.stderr = open('Error_Log.txt','w')
---------Main Code-------------
while True:
    try:
        if session.recv_ready():
            stdout_data.append(session.recv(16384))
        if session.recv_stderr_ready():
            stderr_data.append(session.recv_stderr(16384))
        if session.exit_status_ready():
            break
    except socket.timeout:
        print("SSH channel timeout exceeded.")
        break
    except Exception:
        traceback.print_exc()
        break
----------Main code ends-------------------
print 'exit status: ', session.recv_exit_status()
print ''.join(stdout_data)
print ''.join(stderr_data)
In your Python code you should be able to do the following at the end (write expects a string, so join the captured chunks first):
sys.stdout.write(''.join(stdout_data))
sys.stderr.write(''.join(stderr_data))
I don't know much about sys.stdout, but have you tried moving some code around? For example, write everything at the end, or use something like:
f = open('Error_Log.txt','w')
f.write(''.join(stdout_data))
Best guess!
Have you considered just using the logging module instead?
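If you go that route, a minimal sketch with two file handlers (the file names are taken from the question; stdout_data and stderr_data are the lists from your code):
import logging

log = logging.getLogger('ssh_session')
log.setLevel(logging.INFO)
out_handler = logging.FileHandler('Output_Log.txt')
out_handler.setLevel(logging.INFO)
err_handler = logging.FileHandler('Error_Log.txt')
err_handler.setLevel(logging.ERROR)
log.addHandler(out_handler)
log.addHandler(err_handler)

log.info(''.join(stdout_data))     # INFO: recorded in Output_Log.txt only
log.error(''.join(stderr_data))    # ERROR: recorded in both files (ERROR >= INFO)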
To capture the output of an SSH command execution session in a txt file, I did this:
sys.stdout = open('Output_Log.txt','w')
Now, once the SSH session is closed, I want to stop writing to the file and let the other print statements go to the console.
But that is not happening: the subsequent prints overwrite my log file, and I am losing my SSH command execution data.
Here is the complete code:
session.exec_command('cd /project/neptune/neptune_psv/fw; ./Do_Regr.sh -i Testlist_Regression.in -m 135.24.237.198 -g')
stdout_data = []
stderr_data = []
sys.stdout = open('Output_Log.txt','w')
sys.stderr = open('Error_Log.txt','w')
while True:
    try:
        if session.recv_ready():
            stdout_data.append(session.recv(16384))
        if session.recv_stderr_ready():
            stderr_data.append(session.recv_stderr(16384))
        if session.exit_status_ready():
            break
    except socket.timeout:
        print("SSH channel timeout exceeded.")
        break
    except Exception:
        traceback.print_exc()
        break
print 'exit status: ', session.recv_exit_status()
print ''.join(stdout_data)
print ''.join(stderr_data)
session.close()
trans.close()
print "############Regression Complete############"
When I open Output_Log.txt, I only find the last print. Whereas if I comment out the last print statement ("Regression Complete"), the Output_Log nicely captures the stdout_data of the session.
Instead of redirecting sys.stdout and sys.stderr, write to the files directly. You are already capturing the output in lists -- use that to your advantage:
try:
    # ... capture output streams (the while loop above) ...
    pass
finally:
    with open('Output_Log.txt', 'w') as output:
        output.write(''.join(stdout_data))
        output.write('\nexit status: %s' % session.recv_exit_status())
    with open('Error_Log.txt', 'w') as output:
        output.write(''.join(stderr_data))
To save the output of an ssh command in a file, don't redirect Python's stdout/stderr; use the subprocess module instead:
from subprocess import call

cmd = ['ssh', 'host', 'cd /project/neptune/neptune_psv/fw; '
       './Do_Regr.sh -i Testlist_Regression.in -m 135.24.237.198 -g']
with open('Output_Log.txt', 'w') as out, open('Error_Log.txt', 'w') as err:
    rc = call(cmd, stdout=out, stderr=err)
Or, if you want to continue using paramiko, a minimal change to your code is to use out.write instead of stdout_data.append and err.write instead of stderr_data.append, where out and err are the corresponding file objects. Do not redirect sys.stdout and sys.stderr -- it is unnecessary here.
To avoid losing data, you should also read any still-buffered output even after the remote command is finished, i.e., read until EOF (until an empty result) after session.exit_status_ready() becomes true.
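A sketch of that draining step, assuming out and err are the file objects mentioned above (paramiko's recv and recv_stderr return an empty string at EOF):
# after session.exit_status_ready(): keep reading until EOF so that
# data still buffered in the channel is not lost
while True:
    data = session.recv(16384)
    if not data:
        break
    out.write(data)
while True:
    data = session.recv_stderr(16384)
    if not data:
        break
    err.write(data)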
Well, with the existing code I found one solution:
stdout_data = []
stderr_data = []
temp = sys.stdout                        # added this: save the real stdout
sys.stdout = open('Output_Log.txt','w')
sys.stderr = open('Error_Log.txt','w')
---------------Main Code---------------------
print 'exit status: ', session.recv_exit_status()
print ''.join(stdout_data)
print ''.join(stderr_data)
session.close()
trans.close()
sys.stdout.close()
sys.stdout = temp                        # added this: restore the real stdout
print "############Regression Complete############"
This works well for me, but I will try both of the solutions suggested above and reply later.
My Python code uses a subprocess to call ifconfig through the shell, using ">" to write the output to a text file. When the subprocess finishes and returns success, I read the output file. I do this at fixed intervals to monitor the network status, but occasionally I'm unable to open the output file. I was just reading that Popen has optional arguments for stdout and stderr, which may be safer/better supported, but I'm curious as to why my current version fails. My code is below; there are a few objects and macros from my library included without explanation, but I think the code is still clear enough for this question.
Why does opening the output file occasionally fail? Is it possible the file is not ready when the subprocess returns? What would be a way to guarantee it's ready to open?
# Build command line expression.
expr = 'ifconfig' + ' >' + outputFile + ' 2>&1'
try:
    # Execute command line expression.
    p = subprocess.Popen(expr, shell=True)
except:
    Error("Unable to open subprocess.")
if p is None:
    Error("Unable to create subprocess.")
# Wait until command line expression has been executed.
wait = Wait.Wait(Constants.MIN_TIME_TO_QUERY_NETWORK_INFO, Constants.MAX_TIME_TO_QUERY_NETWORK_INFO)
# Execute command then wait for timeout.
if wait.StopUntilCondition(operator.ne, (p.poll,), None, True):
    p.kill()
    Error("Get subnet mask subprocess timed out.")
if p.poll() != 0:
    Error("Failed to get network information from operating system.")
Warning("About to read output file from get subnet mask...")
# Read temporary output file.
f = open(outputFile, "r")
networkInfo = f.read()
f.close()
To avoid the corrupted/missing output, you should call p.wait() before trying to read the file. You don't need to use a file in this case:
from subprocess import check_output, STDOUT
network_info = check_output('ifconfig', stderr=STDOUT)
If you want to interrupt ifconfig before it is done and read its output; see Stop reading process output in Python without hang?
It is probably easier to just use subprocess.Popen() and write the output to a file yourself. Note that if you want to capture stdout and stderr separately, you cannot use subprocess.check_output (it returns only stdout):
try:
    p = subprocess.Popen(['ifconfig'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    outdata, errdata = p.communicate()
except Exception as e:
    print "An error occurred:", e

with open(outputFile, 'w') as outf:
    outf.write(outdata)
With this you don't have to wait; communicate won't return before the process is finished.
If you just want to capture stdout, use subprocess.check_output instead:
try:
    outdata = subprocess.check_output(['ifconfig'])
except Exception as e:
    print "An error occurred:", e

with open(outputFile, 'w') as outf:
    outf.write(outdata)