Returning gracefully from MATLAB back to Python

I call MATLAB code from Python as follows:
matlab_cmd_string = MatlabExePth+ " -nosplash -nodesktop -wait -logfile FileIoReg_MatlabRemoteRun.log -minimize -r "
fname = 'CompareMse '
mat_cmd = matlab_cmd_string + fname + ", exit\""
which gets translated as
'C:\Program Files\MATLAB\R2013b\bin\matlab.exe -nosplash -nodesktop -wait -logfile FileIoReg_MatlabRemoteRun.log -minimize -r CompareMse , exit'
The MATLAB code does its job and then prints an error and stops execution using the following construct:
if (mse > thr)
    error('mse has increased');
end
However, control is not given back to Python.
I tried the following commands in Python:
msg=subprocess.check_output(mat_cmd,stderr=subprocess.STDOUT,shell=False)
msg comes back empty and the console window doesn't show anything, since control is never returned. The same happens with the following method:
proc = subprocess.Popen(mat_cmd , stdout=subprocess.PIPE, shell=True)
out, err = proc.communicate()
output = out.upper()
proc.returncode
If I write the following in MATLAB,
if (mse > thr)
    warning('mse has increased');
    return
end
I get control back to Python with the following:
msg=subprocess.check_output(mat_cmd,stderr=subprocess.STDOUT,shell=False)
proc = subprocess.Popen(mat_cmd , stdout=subprocess.PIPE, shell=True)
out, err = proc.communicate()
output = out.upper()
proc.returncode
msg and out come back as "", err is None, and proc.returncode is 0.
What I need is the following functionality in MATLAB:
for i = 1:3
    % Some code here
    if (mse > thr)
        [print error, return a user-defined exit code and error message back to a Python variable]
    end
    if (mse_new > mse_old)
        [print warning, do not return, but capture the warning back into a Python variable]
    end
    % some code here
end
The difficulty with the warning case is that if the warning condition occurs in loop iteration 1 but not in iterations 2 and 3, Python should still be able to tell that the MATLAB code finished with no errors but one warning, and should capture that warning (and MATLAB should not exit at iteration 1 of the for loop but should complete all iterations).
Any ideas?
sedy

Try subprocess.check_output(*popenargs, **kwargs). You can capture the output of any given command. Check the Python 2.7 subprocess documentation.
import subprocess
msg = subprocess.check_output([MatlabExePth, "-nosplash", "-wait", "-logfile", "FileIoReg_MatlabRemoteRun.log", "-minimize", "-r", fname + ", exit"])
print msg
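One way to build on this for the original requirements (a sketch only, under these assumptions: MatlabExePth, CompareMse, and the log-file name come from the question; the try/catch wrapper and the log parsing are my additions, not built-in MATLAB behaviour): wrap the entry point in try/catch on the -r command line so MATLAB always reaches exit even when error() fires (otherwise it stays at its prompt and never hands control back), then let Python check the return code and scan the -logfile output for warning and error text.
import subprocess

# Paths and names taken from the question; the try/catch wrapper is an addition
# so that MATLAB reaches "exit" even when CompareMse calls error().
MatlabExePth = r"C:\Program Files\MATLAB\R2013b\bin\matlab.exe"
log_name = "FileIoReg_MatlabRemoteRun.log"
matlab_code = "try, CompareMse, catch err, disp(getReport(err)), end, exit"
cmd = [MatlabExePth, "-nosplash", "-nodesktop", "-wait",
       "-logfile", log_name, "-minimize", "-r", matlab_code]

rc = subprocess.call(cmd)

# The log file contains everything MATLAB printed, including warning() and
# error() text, so Python can classify the run itself.
with open(log_name) as f:
    log_text = f.read()
warnings = [line for line in log_text.splitlines() if line.startswith("Warning:")]
had_error = "Error" in log_text  # crude check; adjust to your own error text

print "return code:", rc
print "warnings seen:", warnings
print "error occurred:", had_error
Some MATLAB releases also accept a numeric status such as exit(1), in which case the return code alone can distinguish the error case and the log only needs to be scanned for warnings; check whether your version supports it.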

Related

Python: subprocess.call and variants fail for a particular application from executed .py but not from python in CLI

I have a strange issue here - I have an application that I'm attempting to launch from Python, but all attempts to launch it from within a .py script fail without any discernible output. Testing from within the VSCode debugger. Here are some additional oddities:
When I swap notepad.exe into the .py instead of my target application's path, notepad launches OK.
When I run the script line by line from the CLI (start by launching python, then type out the next 4-5 lines of Python), the script works as expected.
Examples:
#This works in the .py, and from the CLI
import subprocess
cmd = ['C:\\Windows\\system32\\notepad.exe', 'C:\\temp\\myfiles\\test_24.xml']
pipe = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
pipe.wait()
print(pipe)
#This fails in the .py, but works ok when pasted in line by line from the CLI
import subprocess
cmd = ['C:\\temp\\temp_app\\target_application.exe', 'C:\\temp\\myfiles\\test_24.xml']
pipe = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
pipe.wait()
print(pipe)
The result is no output when running the .py
I've tried several other variants, including the following:
import subprocess
tup = 'C:\\temp\\temp_app\\target_application.exe C:\temp\test\test_24.xml'
proc = subprocess.Popen(tup)
proc.wait()
(stdout, stderr) = proc.communicate()
print(stdout)
if proc.returncode != 0:
    print("The error is: " + str(stderr))
else:
    print("Executed: " + str(tup))
Result:
None
The error is: None
1.082381010055542
Now this method indicates there is an error, because we are returning something other than 0 and printing "The error is: None", and this is because stderr is None. So - is it throwing an error without giving an error?
stdout is also reporting None.
So, let's try check_call and see what happens:
print("Trying check_call")
try:
    subprocess.check_call('C:\\temp\\temp_app\\target_application.exe C:\\temp\\test\\test_24.xml', shell=True)
except subprocess.CalledProcessError as error:
    print(error)
Results:
Trying check_call
Command 'C:\temp\temp_app\target_application.exe C:\temp\test\test_24.xml' returned non-zero exit status 1.
I've additionally tried subprocess.run, although it is missing the wait procedure I was hoping to use.
import subprocess
tup = 'C:\\temp\\temp_app\\target_application.exe C:\temp\test\test_24.xml'
proc = subprocess.run(tup, check=True)
proc.wait()
(stdout, stderr) = proc.communicate()
print(stdout)
if proc.returncode != 0:
    print("The error is: " + str(stderr))
else:
    print("Executed: " + str(tup))
What reasons might be worth chasing, or what other ways of trying to catch an error might work here? I don't know how to interpret "`" as an error result.
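No answer is recorded in this thread, but one more approach worth trying (a sketch only; the paths and the cwd value are assumptions based on the question) is to run the target through subprocess.run with both streams captured as text and the working directory set explicitly, since applications that work from an interactive shell but not from a script are often sensitive to cwd, and a path written as 'C:\temp\...' without doubled backslashes silently turns \t into a tab:
import subprocess

# Hypothetical paths copied from the question; raw strings avoid the
# accidental "\t" tab escape present in some of the attempts above.
exe = r'C:\temp\temp_app\target_application.exe'
arg = r'C:\temp\myfiles\test_24.xml'

# capture_output=True collects stdout and stderr; text=True decodes them.
# cwd is set to the application's own folder in case it loads files relative
# to its location (an assumption, not something the question states).
proc = subprocess.run([exe, arg],
                      capture_output=True,
                      text=True,
                      cwd=r'C:\temp\temp_app')

print("return code:", proc.returncode)
print("stdout:", proc.stdout)
print("stderr:", proc.stderr)
Note that subprocess.run waits for the process to finish before returning, so no separate wait() call is needed.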

Python does not execute script

I have written Python code to generate a shell script and then run the script using subprocess.
The script file is created, but when I try to run it from within the code, it does not do anything. If I run a similar script created outside the code, it works as expected.
Here is my code :
import subprocess
import os
cwd = os.getcwd()
file_name = cwd + "/cmd_file_from_python"
fd = open(file_name,"w")
fd.write("#!/usr/local/bin/tcsh -f\n")
fd.write("echo 'PRINT FROM CMD_FILE_FROM_PYTHON'\n")
fd.close
os.chmod(file_name, 0o777)
cmd=file_name
p = subprocess.Popen(cmd,executable='/bin/ksh', shell=True, stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
(stdout,stderr) = p.communicate()
p_status = p.wait()
print "Command output : ", stdout
print "Command outerr : ", stderr
print "Command exit status/return code : ", p_status
print "================================================================"
file_name = cwd + "/cmd_file"
cmd = file_name
p = subprocess.Popen(cmd,executable='/bin/ksh', shell=True, stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
(stdout,stderr) = p.communicate()
p_status = p.wait()
print "Command output : ", stdout
print "Command outerr : ", stderr
print "Command exit status/return code : ", p_status
and the output :
Command output :
Command outerr :
Command exit status/return code : 0
================================================================
Command output : PRINT FROM CMD_FILE
Command outerr :
Command exit status/return code : 0
Here is the code of the script which I created outside the code:
$ cat cmd_file
#!/usr/local/bin/tcsh -f
echo 'PRINT FROM CMD_FILE'
If I check both files, the only difference is the print:
$ diff cmd_file_from_python cmd_file
2c2
< echo 'PRINT FROM CMD_FILE_FROM_PYTHON'
---
> echo 'PRINT FROM CMD_FILE'
Your file is empty while the program is running:
fd = open(file_name,"w")
fd.write("#!/usr/local/bin/tcsh -f\n")
fd.write("echo 'PRINT FROM CMD_FILE_FROM_PYTHON'\n")
fd.close
Note the lack of call parens on fd.close; you never actually closed the file, so the entire contents were likely sitting in Python's buffer and never made it to disk until the program ended (when the CPython reference interpreter, as an implementation detail, cleans up globals and closes open files as a side effect; in another interpreter the data might never reach disk at all).
To fix it, actually call close. Or, even better, switch to the much safer with-statement approach, where the close is implicit and automatic, occurring even if an exception or return causes you to exit the code early:
with open(file_name, "w") as fd:
    fd.write("#!/usr/local/bin/tcsh -f\n")
    fd.write("echo 'PRINT FROM CMD_FILE_FROM_PYTHON'\n")
# No need to call close; file is guaranteed closed when you exit with block
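If the file really does have to stay open while the subprocess runs, the buffered data can also be pushed to disk explicitly. This is a minimal sketch of that alternative (file_name as defined in the question's code), not what the answer recommends; the with block above is the cleaner fix:
import os

fd = open(file_name, "w")
fd.write("#!/usr/local/bin/tcsh -f\n")
fd.write("echo 'PRINT FROM CMD_FILE_FROM_PYTHON'\n")
fd.flush()               # push Python's buffer down to the OS
os.fsync(fd.fileno())    # ask the OS to commit it to disk
# ... run the subprocess here while fd is still open ...
fd.close()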

How to check the status of a shell script using subprocess module in Python?

I have a simple Python script which executes a shell script using the subprocess module in Python.
Below is my Python script, which calls the testing.sh shell script, and it works fine.
import os
import json
import subprocess
jsonData = '{"pp": [0,3,5,7,9], "sp": [1,2,4,6,8]}'
jj = json.loads(jsonData)
print jj['pp']
print jj['sp']
os.putenv( 'jj1', 'Hello World 1')
os.putenv( 'jj2', 'Hello World 2')
os.putenv( 'jj3', ' '.join( str(v) for v in jj['pp'] ) )
os.putenv( 'jj4', ' '.join( str(v) for v in jj['sp'] ) )
print "start"
subprocess.call(['./testing.sh'])
print "end"
And below is my shell script -
#!/bin/bash
for el1 in $jj3
do
echo "$el1"
done
for el2 in $jj4
do
echo "$el2"
done
for i in $( david ); do
echo item: $i
done
Now the question I have is this: if you look at my Python script, I am printing start, then executing the shell script, and then printing end. So suppose, for whatever reason, the shell script I am executing has a problem; then I don't want to print out end.
So in the above example, the shell script will not run properly, as david is not a Linux command, so it will throw an error. How should I check the status of the entire bash shell script and then decide whether I need to print end or not?
I have just added a for-loop example; it can be any shell script.
Is it possible to do this?
You can check stderr of the bash script rather than return code.
proc = subprocess.Popen('./testing.sh', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = proc.communicate()
if stderr:
    print "Shell script gave some error"
else:
    print "end"  # Shell script ran fine.
Just use the returned value from call():
import subprocess
rc = subprocess.call("true")
assert rc == 0 # zero exit status means success
rc = subprocess.call("false")
assert rc != 0 # non-zero means failure
You could use check_call() to raise an exception automatically if the command fails instead of checking the returned code manually:
rc = subprocess.check_call("true") # <-- no exception
assert rc == 0
try:
    subprocess.check_call("false")  # raises an exception
except subprocess.CalledProcessError as e:
    assert e.returncode == 1
else:
    assert 0, "never happens"
Well, according to the docs, .call will return the exit code back to you. You may want to check that you actually get an error return code, though. (I think the for loop will still return a 0 code since it more-or-less finished.)
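Pulling those answers together for the original testing.sh (a sketch only; whether the failing david command actually changes the script's exit status depends on the script itself, which is why this looks at stderr as well as the return code):
import subprocess

print "start"
proc = subprocess.Popen(['./testing.sh'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = proc.communicate()

# Only print "end" when the script exited cleanly and wrote nothing to stderr.
if proc.returncode == 0 and not stderr:
    print stdout
    print "end"
else:
    print "shell script had a problem (rc=%d): %s" % (proc.returncode, stderr)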

Catching runtime error for process created by python subprocess

I am writing a script which can take a file name as input, compile it and run it.
I am taking the name of a file as input (input_file_name). I first compile the file from within Python:
self.process = subprocess.Popen(['gcc', input_file_name, '-o', 'auto_gen'], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
Next, I'm executing the executable using the same (Popen) call:
subprocess.Popen('./auto_gen', stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
In both cases, I'm catching the stdout(and stderr) contents using
(output, _) = self.process.communicate()
Now, if there is an error during compilation, I am able to catch the error because the returncode is 1 and I can get the details of the error because gcc sends them on stderr.
However, the program itself can return a random value even on executing successfully(because there might not be a "return 0" at the end). So I can't catch runtime errors using the returncode. Moreover, the executable does not send the error details on stderr. So I can't use the trick I used for catching compile-time errors.
What is the best way to catch a runtime error OR to print the details of the error? That is, if ./auto_gen throws a segmentation fault, I should be able to print either one of:
'Runtime error'
'Segmentation Fault'
'Program threw a SIGSEGV'
Try this. The code runs a subprocess which fails and prints to stderr. The except block captures the specific error exit code and stdout/stderr, and displays it.
#!/usr/bin/env python
import subprocess
try:
    out = subprocess.check_output(
        "ls non_existent_file",
        stderr=subprocess.STDOUT,
        shell=True)
    print 'okay:', out
except subprocess.CalledProcessError as exc:
    print 'error: code={}, out="{}"'.format(
        exc.returncode, exc.output,
    )
Example output:
$ python ./subproc.py
error: code=2, out="ls: cannot access non_existent_file: No such file or directory
"
If ./auto_gen is killed by a signal, then self.process.returncode (after .wait() or .communicate()) is less than zero and its absolute value reports the signal, e.g., returncode == -11 for SIGSEGV.
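As a small illustration of that (a sketch; ./auto_gen is the executable from the question), the signal module can translate the negative return code back into a readable name:
import signal
import subprocess

# Map signal numbers (e.g. 11) back to names (e.g. "SIGSEGV").
SIGNAL_NAMES = dict((int(getattr(signal, name)), name)
                    for name in dir(signal)
                    if name.startswith('SIG') and not name.startswith('SIG_'))

proc = subprocess.Popen('./auto_gen', stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = proc.communicate()

if proc.returncode < 0:
    sig = -proc.returncode
    print("Runtime error: program was killed by %s" % SIGNAL_NAMES.get(sig, sig))
elif proc.returncode != 0:
    print("Program exited with non-zero status %d" % proc.returncode)
else:
    print("Program finished normally")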
Please check the following link for capturing runtime errors or real-time output from a subprocess:
https://www.endpoint.com/blog/2015/01/28/getting-realtime-output-using-python
import shlex
import subprocess

def run_command(command):
    process = subprocess.Popen(shlex.split(command),
                               stdout=subprocess.PIPE)
    while True:
        output = process.stdout.readline()
        if output == '' and process.poll() is not None:
            break
        if output:
            print output.strip()
    rc = process.poll()
    return rc

Check output of a python command

I have a Python script which tries to run an external command and look at the result of the command. It needs to use the value 'count=' from the output of the external command.
COUNT_EXP = re.compile("count=(.*)")
cmd = [] # external command
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
    result = COUNT_EXP.match(line)
    if result:
        print "count= " + result.group(1)
        return int(result.group(1))
When I try to run my script, my external command ("cmd") gets executed and I see count=10 in the shell. But why can't my Python code find that and print "count= 10" in the 'if' clause above?
p = subprocess.Popen(['python', 'blah.py'], stdout=subprocess.PIPE)
while True:
    line = p.stdout.readline()
    if len(line) != 0:
        print "success"  # if this code works, expanding to your regex should also work
I wrote the following C program:
#include "stdio.h"
int main() {
    printf("count=199");
    return 0;
}
... which I called countOutput.c and the following Python script, modified from yours:
import subprocess, re
COUNT_EXP = re.compile("count=(.*)")
cmd = "./countOutput" # external command
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
    result = COUNT_EXP.match(line)
    if result:
        print "count is equal to " + result.group(1)
... which I called countTest.py, and then ran:
$ python countTest.py
count is equal to 199
... which all works as expected. I'd therefore tend to agree with @kichik in thinking that the external command that you're using may be writing to stderr rather than stdout.
It might be printing that to stderr. Try redirecting stderr to PIPE as well and read the data from there. You can also append 2>&1 to the end of the command to get stderr redirected to stdout by the shell; you might have to add shell=True for that.
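A minimal sketch of both variants described above (the command name is a placeholder, since the asker's actual cmd is not shown in the question):
import re
import subprocess

COUNT_EXP = re.compile("count=(.*)")

# Variant 1: capture stderr on its own pipe and search both streams.
p = subprocess.Popen(['some_external_command'],      # placeholder command
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate()
for stream in (out, err):
    m = COUNT_EXP.search(stream)
    if m:
        print "count= " + m.group(1)

# Variant 2: let the shell merge stderr into stdout with 2>&1.
p = subprocess.Popen('some_external_command 2>&1',   # placeholder command
                     shell=True,
                     stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    m = COUNT_EXP.match(line)
    if m:
        print "count= " + m.group(1)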
