Run a program from Python several times without initializing different shells - python

I want to run a compiled Fortran numerical model from Python. It is too complex to compile with F2PY without implementing several changes in the Fortran routines. This is why I am just calling its executable using the subprocess module.
The problem is that I have to call it a few thousand times, and I have the feeling that spawning so many shells is slowing the whole thing down.
My implementation (it is difficult to provide a reproducible example, sorry) looks like:
import os
import subprocess
foo_path = '/path/to/compiled/program/'
program_dir = os.path.join(foo_path, "FOO") #FOO is the Fortran executable
instruction = program_dir + " < nlst" # It is necessary to provide FOO a text file (nlst)
# with the configuration for the program
subprocess.call(instruction, shell=True, cwd=foo_path) #run the executable
Running it this way (inside a loop), it works well and FOO generates a text file output that I can read from Python. But I'd like to do the same while keeping the shell active and just providing it the "nlst" file path. Another nice option might be to start an empty shell and keep it waiting for the instruction string, which would look like "./FOO < nlst". But I am not sure how to do that, any ideas?
Thanks!
[Edited] Something like this should work, but .communicate ends the process and a second call does not work:
from subprocess import Popen, PIPE
foo_path = '/path/to/FOO/'
process = Popen(['bash'], stdin=PIPE, cwd=foo_path)
process.communicate(b'./FOO < nlst')
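A sketch of keeping a single shell alive instead: write one command per line to the shell's stdin and only close it when everything is queued (here a temporary directory and `echo run N` stand in for foo_path and `./FOO < nlst`):

```python
from subprocess import Popen, PIPE
import tempfile

# Demo of one persistent shell: tempfile stands in for foo_path,
# and 'echo run N' stands in for './FOO < nlst'.
workdir = tempfile.mkdtemp()
process = Popen(['bash'], stdin=PIPE, stdout=PIPE, cwd=workdir, text=True)
for i in range(3):
    process.stdin.write('echo run %d\n' % i)  # queue one run per line
process.stdin.close()           # no more commands: bash exits after the last one
output = process.stdout.read()  # collect everything the shell printed
rc = process.wait()
print(output)
```

Unlike communicate, plain writes do not end the child, so many commands can go through the same shell; closing stdin is what finally lets bash exit.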

I found this solution using the pexpect module:
import pexpect
import os.path
foo_path = '/path/to/FOO/'
out_path = '/path/to/FOO/foo_out_file' #path to output file
child = pexpect.spawn('bash', cwd=foo_path)
child.sendline('./FOO < nlst')
while not os.path.exists(out_path): # wait until out_path is created
    continue
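One caveat: the bare `continue` loop above polls the filesystem at full speed and pins a CPU core. A gentler sketch, with a hypothetical `wait_for_file` helper that sleeps between checks and gives up after a timeout:

```python
import os.path
import time

def wait_for_file(path, timeout=60.0, poll=0.1):
    """Poll until `path` exists, sleeping between checks instead of spinning."""
    waited = 0.0
    while not os.path.exists(path):
        if waited >= timeout:
            raise TimeoutError('%s not created after %s s' % (path, timeout))
        time.sleep(poll)
        waited += poll
```

The timeout also guards against waiting forever if FOO crashes and never writes the output file.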

To extend my comment, here is an example for threading with your code:
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor
foo_path = '/path/to/compiled/program/'
program_dir = os.path.join(foo_path, "FOO") #FOO is the Fortran executable
instruction = program_dir + " < nlst" # It is necessary to provide FOO a text file (nlst)
# with the configuration for the program
def your_function():
    subprocess.call(instruction, shell=True, cwd=foo_path) # run the executable
# create executor object
executor = ThreadPoolExecutor(max_workers=4) # uncertain of how many workers you might need/want
# specify how often you want to run the function
for i in range(10):
    # start your function as thread
    executor.submit(your_function)

What I meant in my comment was something like the following Python script:
from subprocess import Popen, PIPE
foo_path = '/home/ronald/tmp'
process = Popen(['/home/ronald/tmp/shexecutor'], stdin=PIPE, cwd=foo_path, text=True)
process.stdin.write("ls\n")
process.stdin.write("echo hello\n")
process.stdin.write("quit\n")
process.stdin.close()
process.wait()
And the shell script that executes the commands:
#!/bin/bash
while read cmdline; do
    if [ "$cmdline" == "quit" ]; then
        exit 0
    fi
    eval "$cmdline" >> x.output
done
Instead of doing an "eval", you can do virtually anything.
Note that this is just an outline of a real implementation.
You'd need to do some error handling. And if you are going to use this in a production environment, be sure to harden the code to the limit.


How to monitor the subprocess object?

I need to convert a lot of files using the mediapipe compiler, which runs on bazel. There are hundreds of files, so this process has to be automated. The command to be executed normally would be something like:
GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/multi_hand_tracking/multi_hand_tracking_cpu --input_video_path=/home/tony/Videos/HandWashDataset/Step_1/HandWash_001_A_01_G01.mp4
Where GLOG_logtostderr is the logger attached to the bazel program to output the log (result). I used a redirect (2>a.txt) at the end to get the results written to a txt file.
I have got the program working by writing a Python script using subprocess module.
import glob
import os
import time
import subprocess
files = glob.glob("/home/tony/Videos/HandWashDataset/Step_1/*.mp4")
os.chdir("/home/tony/mediapipe")
output = ""
for i in files:
    print("process file {}".format(i))
    output = (i[:len(i)-4]) + ".txt"
    inputf = "--input_video_path=" + i
    outputf = "2>" + output
    f = open("blah.txt", "w")
    sp = subprocess.call(["GLOG_logtostderr=1 bazel-bin/mediapipe/examples/desktop/multi_hand_tracking/multi_hand_tracking_cpu " + inputf], shell=True, stderr=f)
    time.sleep(180)
    f.close()
print("process finished")
The problem I am having is that currently I seem to have no control over the process in each iteration, since the script is invoking another program. The call seems to return nearly instantly, but the actual conversion takes a few minutes. Without time.sleep, all instances of bazel are launched at once, which killed my computer. Is there a way to monitor the process so I can convert one file at a time?
You should use subprocess.run:
That runs the command described by args, waits for the command to complete, then returns a CompletedProcess instance.
see also : https://docs.python.org/3/library/subprocess.html
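As a minimal demonstration that subprocess.run blocks until the child exits (here `sleep 1` stands in for the bazel command):

```python
import subprocess
import time

# subprocess.run waits for the child to finish, so conversions run
# one at a time with no need for time.sleep; 'sleep 1' stands in
# for the long-running bazel command here.
start = time.monotonic()
result = subprocess.run(['sleep', '1'])
elapsed = time.monotonic() - start
print(result.returncode, elapsed)
```

In the loop from the question, subprocess.run(cmd, shell=True, stderr=f) would keep the stderr redirect, and result.returncode can be checked to spot failed conversions.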

Python subprocess always waits for programm [duplicate]

I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm","-r","some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output=subprocess.Popen(["sleep", "30"])
ls_output.communicate() # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command')
(or, alternatively, you may try the os.P_NOWAIT flag; note that os.P_DETACH is Windows-only, so P_NOWAIT is actually the more portable of the two, though the child is not detached from the terminal).
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess
if len(sys.argv) == 2:
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Both capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and save their stdout to a log file and to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading
def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time
for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start by investigating the os module for forking child processes (open an interactive session and issue help(os)). The relevant functions are fork and any of the exec ones. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument that contains the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)

if pid == 0:
    ## eventually use os.putenv(..) to set environment variables
    ## os.execv strips off args[0] for the arguments
    os.execv(args[0], args)
You can use
import os

pid = os.fork()
if pid == 0:
    # continue with other code ...
This will make the python process run in background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't have a console, so in theory it should not appear and should work like a background process.

How to run some other program interactively from a python script

I am new to Python. I would like to run an "EDA tool" from Python interactively.
Here are the steps I wanted to follow:
Start the tool
Run the first command in the tool
Check the first command's output or parse it (in the main Python script)
Run the second command
Parse the output in python script
[...]
x. Exit the tool
x+1. Do some post-processing in the main Python script
I am looking for some information or pointers related to it so that I can read on my own.
This depends on what you mean by a "command". Is each command a separate process (in the operating-system sense of the word)? If so, it sounds like you need the subprocess module.
import subprocess
execNamePlusArgs = [ 'ls', '-l' ] # unix-like (i.e. non-Windows) example
sp = subprocess.Popen( execNamePlusArgs, stdout=subprocess.PIPE, stderr=subprocess.PIPE )
stdout, stderr = sp.communicate() # this blocks until the process terminates
print( stdout )
If you don't want it to block until termination (e.g. if you want to feed the subprocess line-by-line input and examine its output line by line), then you would define stdin=subprocess.PIPE as well and then, instead of communicate, you might use calls to sp.stdin.write(whatever), sp.stdout.readline() and sp.stderr.readline().
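A minimal sketch of that line-by-line interaction, using `cat` as a stand-in for the interactive tool (a real EDA tool would need its own prompt handling):

```python
import subprocess

# 'cat' echoes each input line back, standing in for an interactive tool.
sp = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE, text=True)
sp.stdin.write('first command\n')
sp.stdin.flush()                      # push the line to the child now
reply = sp.stdout.readline().strip()  # read one line of the tool's output
sp.stdin.close()                      # tell the tool we are done
sp.wait()
print(reply)
```

The flush after each write matters: without it the line may sit in Python's buffer and the readline will deadlock waiting for output that never comes.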
You should look into using something like python-fabric
It allows you to use higher level language constructs such as context managers and makes the shell more usable with python in general.
Example usage:
from fabric.operations import local
from fabric.context_managers import lcd
with lcd(".."): # Prefix all commands with 'cd .. &&'
    ls = local('ls', capture=True) # Run 'ls' command and put result into variable
print ls
>>>
[localhost] local: ls
Eigene Bilder
Eigene Musik
Eigene Videos
SynKernelDiag2015-11-07_10-01-13.log
desktop.ini
foo
scripts

Run a perl script from my python script, print the output and wait for it to finish

I have a python script which at one point is required to run a perl script, wait for it to finish, then continue.
As this case will only occur on a windows machine, I thought I could simply open a new cmd and run the perl script there, but I'm having difficulties doing so.
import os
os.system("start /wait cmd /c {timeout 10}")
should open a new cmd and sleep for 10 seconds, but it closes right away. I don't want to put the perl script in place of the timeout 10, as it is quite resource-intensive.
Another idea was to use a subprocess with call or Popen and wait.
perl_script = subprocess.call(['script.pl', params])
But I'm not sure what would happen to the stdout of the perl script in such a case.
I know the location and the parameters of the perl script.
How can I run a perl script from my python script, print the output (a lot) and wait for it to finish?
edit:
As suggested by @rchang, I added the subprocess with communicate as follows, and it works just as intended.
import subprocess, sys
perl = "C:\\perl\\bin\\perl.exe"
perl_script = "C:\\scripts\\perl\\flamethrower.pl"
params = " --mount-doom-hot"
pl_script = subprocess.Popen([perl, perl_script, params], stdout=sys.stdout)
pl_script.communicate()
These are my first lines of perl, just a quick copy/paste script to test this.
print "Hello Perld!\n";
sleep 10;
print "Bye Perld!\n";
import subprocess
import sys
perl_script = subprocess.Popen(["script.pl", params], stdout=sys.stdout)
perl_script.communicate()
This should hook up the stdout of the subprocess to the stdout stream of the Python script, provided you won't actually need the Python script to output anything else meaningful during execution that may not be related to the subprocess output.
You could try:
perl_script = subprocess.check_output(["script.pl", params])
print perl_script

Can we execute multiple commands in the same command prompt launched by python

There is an Excel file which contains the paths of multiple scenarios. I am using os.system(command) in a for loop. In each iteration, a path is read from the Excel file and the scenario for that path is executed.
My problem is that every time, by using os.system(), a CMD window opens, executes one command, and closes; in the next iteration the second path is read and executed, and the CMD closes again. The CMD pops up again and again, and the system is busy during that period and not usable for other tasks. I want to execute all the commands (scenarios) in one CMD window so that I can minimize it and use the system for other tasks.
In each iteration, there are two main steps:
os.chdir(PATH)
os.system(path of exe+" "+name of config file that is present at PATH")
Can it be done using subprocess? If yes, please give me an example of how it can be implemented.
If you want to use the subprocess module, try something like this :
from subprocess import call
import os.path
def call_scenario(path, config_file):
    retcode = call(["path/of/exe", os.path.join(path, config_file)])
    if retcode != 0:
        print("Something bad happened: %s" % retcode)
When using subprocess.call, the shell=False parameter avoids launching a cmd window just to run the command.
It can be done - here is a quick example using multiprocessing (Python 2.6 or newer).
The example below uses a Unix command ("ls") and Unix paths ("/usr", etc.), but just replace those with the commands and paths you need.
from multiprocessing import Process
import os
paths = ["/tmp", "/usr", "/usr/include"]
def exec_(path):
    p = Process()
    p.run = lambda: os.system("ls %s" % path)
    p.start()

for path in paths:
    exec_(path)
Another option, if you need some sophisticated control over what is running, return codes, etc., is to make use of the Fabric project.
Although it is aimed at executing multiple commands on different hosts using ssh, I think it could be used for different paths on the same host.
URL for fabric:
http://docs.fabfile.org/en/1.3.3/index.html
To run c:\path\to\exe with config.ini from each path simultaneously, changing the current directory to each path before the executable runs:
from subprocess import Popen
processes = [Popen([r"c:\path\to\exe", "config.ini"], cwd=path) for path in paths]
for p in processes: p.wait()
If you don't want to run all commands in parallel then use subprocess.call() with the same arguments as for subprocess.Popen().
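A sketch of the sequential variant (temporary directories and `echo` stand in for the scenario paths and the real exe):

```python
import subprocess
import tempfile

# Temporary directories stand in for the scenario paths, and 'echo'
# stands in for the real exe.
paths = [tempfile.mkdtemp() for _ in range(2)]
codes = []
for path in paths:
    # call() blocks, so each scenario finishes before the next one starts
    rc = subprocess.call(['echo', 'config.ini'], cwd=path)
    codes.append(rc)
print(codes)
```

Since everything runs inside the one Python process, no extra CMD window pops up per scenario, and the return codes tell you which scenarios failed.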
