Python Not Waiting for MATLAB to Finish

I am interfacing a small MATLAB script with Python via the subprocess module, as follows:
cmd='(matlab -nosplash -nodesktop -r "optimizer;quit;")'
p = subprocess.Popen(cmd,stdin=None,stdout=None,shell=True)
#subprocess.Popen.wait(p)
#p.wait()
print "DONE?"
But "DONE?" is printed even before MATLAB starts, and all my code past that point breaks because of it.
I have tried:
Using os.system() calls (this is where I started, but I read on SO that it's deprecated).
Using p.wait() and subprocess.Popen.wait. Neither works.
Using a manual pause of 3 minutes (the maximum time MATLAB takes to finish on average). Super sloppy.
What am I missing?

Works fine for me:
import subprocess
retcode = subprocess.call(["matlab", "-nosplash", "-nodesktop", "-r", "quit;"])
print "DONE", retcode
Split the command into separate arguments, use only the options you actually require (there is no need for shell=True, for example), and use the function that directly does what you are after: subprocess.call, which runs the command and waits for it to complete.
Depending on your installation (see http://www.mathworks.com/help/matlab/ref/matlabwindows.html), Matlab may be launched through a starter program that immediately quits. To handle that, add "-wait" to your argument list, as in the sketch below.
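A minimal sketch of the call from the question with "-wait" added (the optimizer script name is taken from the question):

import subprocess

# -wait keeps the starter program alive until MATLAB itself exits,
# so subprocess.call really blocks until the run is finished
retcode = subprocess.call(
    ["matlab", "-wait", "-nosplash", "-nodesktop", "-r", "optimizer;quit;"])
print "DONE", retcode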

Start Matlab with the "-wait" flag. From the documentation:
"MATLAB is started by a separate starter program which normally launches MATLAB and then immediately quits. Using this option tells the starter program not to quit until MATLAB has terminated. This option is useful when you need to process the results from MATLAB in a script. Calling MATLAB with this option blocks the script from continuing until the results are generated."

Based on your response to my comment, let me answer your question with what I did for my application, which had a similar process to yours (albeit in C#). Instead of trying to force your process to wait for MATLAB to finish up (which is obviously not working right now), just wait for that CSV file to be written. If you're worried about possibly having duplicates, append the current date and time to the file name, and that should do the trick.
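For illustration, a minimal polling sketch; the file name and the ten-minute safety timeout are assumptions, not details from the original setup:

import os
import time

csv_path = "results.csv"  # hypothetical output file written by MATLAB
deadline = time.time() + 10 * 60  # crude safety net

while not os.path.exists(csv_path):
    if time.time() > deadline:
        raise RuntimeError("MATLAB did not produce %s in time" % csv_path)
    time.sleep(5)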

Related

missing stdout before subprocess.Popen crash [duplicate]

I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module and it does not set printing to be flushed such as print('example', flush=True). Is there a way to force the flushing through my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs behave differently when run interactively in a terminal than as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself and more with the Unix/Linux architecture.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, by manually adding stdout.flush() calls.
Another way to get output printed to the screen is to "trick" the program into thinking it is working with an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you will not explicitly call subprocess.run (or Popen or ...). Instead you have to use pty.spawn:
import os
import pty

def prout(fd):
    # pty.spawn calls this to read the child's output; here we
    # drain and echo everything ourselves as it appears
    data = os.read(fd, 1024)
    while data:
        print(data.decode(), end="")
        data = os.read(fd, 1024)

pty.spawn("./callee.py", prout)
As can be seen, this requires a special function for handling stdout. Above, I just print it to the terminal, but it is of course possible to do other things with the text as well (such as logging or parsing it).
Another way to trick the program is to use an external program called unbuffer. Unbuffer will run your script and make the program think (as with the pty call) that it is called from a terminal. This is arguably simpler, provided unbuffer is installed or you are allowed to install it on your system (it is part of the expect package). All you have to do then is to change your subprocess call to
p=subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then of course handle the output as usual, e.g. with some code like
for line in p.stdout:
    print(line.decode(), end="")
# collect any remaining output and reap the process
print(p.communicate()[0].decode(), end="")
or similar. But this last part I think you have already covered, as you seem to be doing something with the output.
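To also send the stream to a log file in real time, as asked above, here is a small extension of the same loop (the log file name is an assumption, and unbuffer must be installed):

import subprocess

p = subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
with open("progress.log", "wb") as log:
    for line in p.stdout:
        print(line.decode(), end="")  # live progress on screen
        log.write(line)               # real-time copy on disk
        log.flush()
p.wait()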

Python file closes after program execution finishes when using os.startfile()

I have a program that produces a CSV file, and right at the end I use os.startfile(fileName). But when the program finishes execution, the file that was just opened closes as well. The same happens if I add a sleep afterwards: the file loads up, then closes again once the sleep ends.
Any help would be appreciated.
From the documentation for os.startfile:
startfile() returns as soon as the associated application is launched. There is no option to wait for the application to close, and no way to retrieve the application’s exit status.
When using this function, there is no way to make your script wait for the program to complete because you have no way of knowing when it is complete. Because the program is being launched as a subprocess of your python script, the program will exit when the python script exits.
Since you don't say in your question exactly what the desired behavior is, I'm going to guess that you want the python script to block until the program finishes execution (as opposed to detaching the subprocess). There are multiple ways to do this.
Use the subprocess module
The subprocess module allows you to make a subprocess call that will not return until the subprocess completes. The exact call you make to launch the subprocess depends heavily on your specific situation, but this is a starting point:
subprocess.call(['start', '/WAIT', fileName], shell=True)  # /WAIT makes "start" block until the application closes
Use input to allow user to close script
You can have your script block until the user tells the python script that the external program has closed. This probably requires the least modification to your code, but I don't think it's a good solution, as it depends on user input.
os.startfile(fileName)
input('Press enter when external program has completed...')

Parallel Python for loop

I work primarily with the arcgis and pci flavours of Python 2.7. I have a number of processes that I've created that run outside of these programs but use their libraries. They are run via .bat files through cmd.
Currently, they run the processing in a series of for loops, and each for loop processes sequentially. I was wondering if there is a way to run the processing within the for loop for each object in the list at the same time, that is, in parallel. The only way I can think of is opening a cmd window for each object in the list and running the processing separately.
Is what I am asking even possible? Where should I look for solutions?
Look into the subprocess module. You'd want a new command-line window created in the background where test.bat runs in parallel, and in your case you don't want to wait for the command to complete before you continue your program, so use subprocess.Popen instead of subprocess.call (which may also be something to look into):
subprocess.call
Run the command described by args. Wait for command to complete, then return the returncode attribute.
If you want to start an external program from your python script, pass the program's filename to subprocess.Popen(). On Ubuntu Linux you would enter something like:
>>> import subprocess
>>> subprocess.Popen('/usr/bin/gnome-...')
<subprocess.Popen object at 0x7f2bcf93b20>
The return value is a Popen object which has two useful methods: poll() and wait().
poll() is like asking your friend if he has finished running the code you gave him.
wait() is like waiting for your friend to finish working on his code before you keep working on yours (something you might want to look into, as in the sketch below).
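Putting this together for the for loop, a minimal sketch; the batch file name and the item list are hypothetical placeholders:

import subprocess

items = ["tile_a", "tile_b", "tile_c"]  # hypothetical object list

# Popen returns immediately, so all workers run concurrently
procs = [subprocess.Popen(["cmd", "/c", "process_item.bat", item])
         for item in items]

# block until every worker has finished
for p in procs:
    p.wait()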

Control executed program with Python

I want to execute a test run via bash and restart it if the test needs too much time. So far, I have found some good solutions here. But since the kill command does not work properly (when I use it correctly, it says it is not used correctly), I decided to solve this problem using Python. This is the execution call I want to monitor:
EXE="C:/program.exe"
FILE="file.tpt"
HOME_DIR="C:/Home"
"$EXE" -vm-Xmx4096M --run build "$HOME_DIR/test/$FILE" "Auslieferung (ML) Execute"
(The opened *.exe starts a test run which includes some Simulink simulation runs - sometimes there are Simulink errors, and in that case the execution of the tests takes too long and I want to restart the entire process.)
First, I came up with the idea, calling a shell script containing these lines within a subprocess from python:
import subprocess
import time
process = subprocess.Popen('subprocess.sh', shell = True)
time.sleep(10)
process.terminate()
But when I use this, terminate() or kill() does not close the program I started with the subprocess call.
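A likely reason, stated here as an assumption about this setup: with shell=True, terminate() kills the shell rather than the program the shell started. On POSIX systems, a sketch like the following kills the whole process group instead:

import os
import signal
import subprocess
import time

# start_new_session puts the shell and everything it spawns into
# one process group that can be terminated together
process = subprocess.Popen("./subprocess.sh", shell=True,
                           start_new_session=True)
time.sleep(10)
os.killpg(os.getpgid(process.pid), signal.SIGTERM)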
That's why I am now trying to implement the entire call in Python. I got the following so far:
import subprocess
file = "somePath/file.tpt"
p = subprocess.Popen(["C:/program.exe", file])
Now I need to know how to pass the second argument, "Auslieferung (ML) Execute", from the bash invocation. This call starts an internal test run named "Auslieferung (ML) Execute". Any ideas? Or is it better to choose one of the other ways? Or can I get the "kill" option for bash somewhere, somehow?
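For what it's worth, a sketch of how the full bash invocation could translate into an argument list, with subprocess.run's timeout standing in for the restart logic (the 600-second limit is an assumption, and subprocess.run needs Python 3.5+):

import subprocess

file = "somePath/file.tpt"
# each list element becomes exactly one argument, so the string with
# spaces needs no extra quoting
cmd = ["C:/program.exe", "-vm-Xmx4096M", "--run", "build", file,
       "Auslieferung (ML) Execute"]

try:
    subprocess.run(cmd, timeout=600)  # kills the run if it exceeds the limit
except subprocess.TimeoutExpired:
    print("test run took too long, restarting...")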

Python and Scheduling Computation

I wish to schedule a computation to occur after my current computation in Python is finished. Note that my Python interpreter is running through emacs.
For example I am currently running:
>>> for i in range(2, 5):
...     tn.TweetNetwork.create_subnetworks(i)
...
I made a simple mistake and meant to type range(1,5). This has been running for at least 4 hours and should run for another few hours. That being said I do not want to re-execute the loop with the correction and lose all that has been computed.
As I am not by the computer 24/7, how can I schedule Python to execute the function tn.TweetNetwork.create_subnetworks(1) once the current loop has finished?
I use emacs 24.3 and ubuntu 12.04 LTS, let me know if you need more information. All help is greatly appreciated!
EDIT: I like the answer posted, however I do not know how to find the PID. I am running a Python interpreter through emacs. So how would I find that out?
This was too much for a comment, but it isn't a complete reply either.
To get the id of a process started by Emacs:
M-x list-processes
identify the process you want the id of
M-: (process-id (get-process "name-of-the-process"))
But this will give you the process id of the interpreter, not of any other process started from it.
If you then need to get all processes spawned through that process, you can do:
$ pstree PID
where PID is the one you obtained earlier from Emacs.
I think the easiest way is to write another script that waits until your process has finished and then runs tn.TweetNetwork.create_subnetworks(1). This will only work if your create_subnetworks does not access any global variables from the old run and writes all results to a database/file/etc.
# Write a script similar to this:
import os, time

print "Waiting until the old script has completed..."
while os.path.exists("/proc/SCRIPT_PID"):  # substitute the real process id for SCRIPT_PID
    time.sleep(1)

print "Executing create_subnetworks..."
tn = ...  # recreate/reload your TweetNetwork object here
tn.TweetNetwork.create_subnetworks(1)
Connect to your computer by SSH, get the process id with ps axu | grep script_name, and run this new script.
If Tyler's comment does not help, you may eval the following piece of code:
(defun foo (ignored)
  (remove-hook 'comint-output-filter-functions 'foo)
  (run-with-timer 1 nil
                  (lambda ()
                    (goto-char (point-max))
                    (insert "tn.TweetNetwork.create_subnetworks(1)")
                    (comint-send-input))))
(add-hook 'comint-output-filter-functions 'foo)
It defines a function that will insert the command you need into the inferior Python buffer one second after that function is invoked (the delay is there to avoid recursive loops).
Then it sets up that function to be invoked whenever the inferior process (Python, in your case) writes anything. In your case, that would be the ">>>" prompt that Python writes when it is ready. If your code is generating other output, this approach won't work.
If you are using comint in other buffers (shell, sql, ...), you would need to make the variable comint-output-filter-functions local to your Python interactive buffer (with make-variable-buffer-local).
