loop over a batch script that does not terminate - python

I'm trying to execute several batch scripts in a Python loop. However, the batch scripts contain cmd /K and thus do not "terminate" (for lack of a better word). Therefore Python calls the first script and waits forever...
Here is a pseudo-code that gives an idea of what I am trying to do:
import os
import subprocess

params = [MYSCRIPT, os.curdir]
for level in range(10):
    subprocess.call(params)
My question is: "Is there a pythonic solution to get control of the console back and resume looping?"
EDIT: I am now aware that it is possible to launch child processes and continue without waiting for them to return, using
Popen(params, shell=False, stdin=None, stdout=None, stderr=None, close_fds=True)
However, this would launch every iteration of my loop almost simultaneously. Is there a way to wait for each child process to finish its task, and return once it hits the cmd /K and becomes idle?

There is no built-in way, but it's something you can implement.
The examples use bash since I don't have access to a Windows machine, but the approach should be similar for cmd /K.
It might be as easy as:
import subprocess

# start the process in the background
process = subprocess.Popen(
    ['bash', '-i'],
    stdout=subprocess.PIPE,
    stdin=subprocess.PIPE,
    universal_newlines=True  # text mode, so str can be written to stdin
)
# will throw an IOError if the process has terminated by this time for some reason
process.stdin.write("exit\n")
process.stdin.flush()  # make sure the command actually reaches the shell
process.wait()
This sends an exit command to the shell, which is processed just as the script terminates, causing the shell to exit (effectively canceling out the /K).
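The same idea on Windows might look like the sketch below; this is untested here, and myscript.bat is a placeholder for an actual batch file:
import subprocess

# launch the batch file under cmd /K; the shell stays open after the script ends
process = subprocess.Popen(
    ['cmd', '/K', 'myscript.bat'],  # myscript.bat is a hypothetical script
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True
)
# "exit" waits in the pipe until cmd finishes the script and reads its next command
process.stdin.write('exit\n')
process.stdin.flush()
process.wait()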
Here's a more elaborate answer in case you need a solution that checks for some output:
import subprocess

# start the process in the background
process = subprocess.Popen(
    ['bash', '-i'],
    stdout=subprocess.PIPE,
    stdin=subprocess.PIPE,
    universal_newlines=True
)
# check whether the process has terminated yet
process.poll()
while process.returncode is None:
    # read the output from the process
    # note that we can't use readlines() here as it would block waiting for the process
    lines = [x for x in process.stdout.read(5).split("\n") if x]
    if lines:
        # if you want the output to show, you'll have to print it yourself
        print(lines)
        # check for some condition in the output
        if any(":" in x for x in lines):
            # terminate the process
            process.kill()
            # alternatively, send it some input to have it terminate
            # process.stdin.write("exit\n")
    # check for a new return code
    process.poll()
The complication here is with reading the output: if you try to read more than is currently available, the read blocks until the process produces it.
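One way around that (my own sketch, assuming Python 3, not part of the original answer) is to do the blocking reads on a background thread and poll a queue from the main loop:
import queue
import subprocess
import threading

def reader(pipe, q):
    # read one character at a time and hand each to the main thread
    for chunk in iter(lambda: pipe.read(1), ''):
        q.put(chunk)

process = subprocess.Popen(
    ['bash', '-i'],
    stdout=subprocess.PIPE,
    stdin=subprocess.PIPE,
    universal_newlines=True
)
q = queue.Queue()
threading.Thread(target=reader, args=(process.stdout, q), daemon=True).start()

while process.poll() is None:
    try:
        chunk = q.get(timeout=0.5)  # never blocks longer than the timeout
    except queue.Empty:
        continue
    if ':' in chunk:
        process.kill()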

Here is something I use where I start a bunch of processes (2 in this example) and wait for them at the end before the program terminates. It can be modified to wait for specific processes at different times (see comments). In this example one process prints out the %path% and the other prints the directory contents.
import win32api, win32con, win32process, win32event

def CreateMyProcess2(cmd):
    ''' create a process with no window that runs the given command line '''
    si = win32process.STARTUPINFO()
    info = win32process.CreateProcess(
        None,  # AppName
        cmd,   # Command line
        None,  # Process Security
        None,  # Thread Security
        0,     # inherit Handles?
        win32process.NORMAL_PRIORITY_CLASS,
        None,  # New environment
        None,  # Current directory
        si)    # startup info
    # info is the tuple (hProcess, hThread, processId, threadId)
    return info[0]

if __name__ == '__main__':
    handles = []
    cmd = 'cmd /c "dir/w"'
    handle = CreateMyProcess2(cmd)
    handles.append(handle)
    cmd = 'cmd /c "path"'
    handle = CreateMyProcess2(cmd)
    handles.append(handle)
    rc = win32event.WaitForMultipleObjects(
        handles,  # sequence of objects (here, handles) to wait for
        1,        # wait for them all (use 0 to wait for just one)
        15000)    # timeout in milliseconds
    print rc
    # rc = 0 if all tasks have completed before the timeout
Approximate Output (edited for clarity):
PATH=C:\Users\Philip\algs4\java\bin;C:\Users\Philip\bin;C:\Users\Philip\mksnt\ etc......
Volume in drive C has no label.
Volume Serial Number is 4CA0-FEAD
Directory of C:\Users\Philip\AppData\Local\Temp
[.]
[..]
FXSAPIDebugLogFile.txt
etc....
1 File(s) 0 bytes
3 Dir(s) 305,473,040,384 bytes free
0 <-- value of "rc"

Related

subprocess.PIPE prevents executable from closing

Why subprocess.PIPE prevents a called executable from closing.
I use the following script to call an executable file with a number of inputs:
import subprocess, time

CREATE_NO_WINDOW = 0x08000000
my_proc = subprocess.Popen("myApp.exe " + ' '.join([str(input1), str(input2), str(input3)]),
                           startupinfo=subprocess.STARTUPINFO(),
                           stdout=subprocess.PIPE,
                           creationflags=CREATE_NO_WINDOW)
Then I monitor whether the application has finished within a given time (300 seconds) and, if not, I just kill it. I also read the output of the application to know whether it failed to do the required tasks.
proc_wait_time = 300
start_time = time.time()
sol_status = 'Fail'
while time.time() - start_time < proc_wait_time:
    if my_proc.poll() is None:
        time.sleep(1)
    else:
        try:
            sol_status = my_proc.stdout.read().replace('\r\n \r\n', '')
            break
        except:
            sol_status = 'Fail'
            break
else:
    try:
        my_proc.kill()
    except:
        pass
    sol_status = 'Frozen'
if sol_status in ['Fail', 'Frozen']:
    print('Failed running my_proc')
As you can note from the code I need to wait for myApp.exe to finish, however, sometimes myApp.exe freezes. Since the script above is part of a loop, I need to identify such a situation (by a timer), keep track of it and kill myApp.exe so that the whole script doesn't get stuck!
Now, the issue is that if I use subprocess.PIPE (which I suppose I have to if I want to read the output of the application), then myApp.exe doesn't close after finishing, and consequently my_proc.poll() is None is always True.
I am using Python 2.7.
The problem is the amount of data written to subprocess.PIPE: once the OS pipe buffer fills up and nothing reads from it, the child blocks on its writes and never exits. The easiest way to avoid this is to pipe the data directly into files:
_stdoutHandler = open('C:/somePath/stdout.log', 'w')
_stderrHandler = open('C:/somePath/stderr.log', 'w')
my_proc = subprocess.Popen(
    "myApp.exe " + ' '.join([str(input1), str(input2), str(input3)]),
    stdout=_stdoutHandler,
    stderr=_stderrHandler,
    startupinfo=subprocess.STARTUPINFO(),
    creationflags=CREATE_NO_WINDOW
)
...
_stdoutHandler.close()
_stderrHandler.close()
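With the output going to a file, the status check from the original loop becomes a read of the log once the process has exited or been killed; a minimal sketch (the '\r\n \r\n' cleanup mirrors the original code):
# after my_proc has exited (or been killed):
with open('C:/somePath/stdout.log') as f:
    sol_status = f.read().replace('\r\n \r\n', '')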

Python3/Linux - Open text file in default editor and wait until done

I need to wait until the user is done editing a text file in the default graphical application (Debian and derivatives).
If I use xdg-open with subprocess.call (which usually waits), it continues right after opening the file in the editor, I assume because xdg-open itself starts the editor asynchronously.
I finally got more or less working code by retrieving the launcher for the text/plain mime-type and using that with Gio.DesktopAppInfo.new to get the command for the editor. That works, provided the editor is not already open; in that case the process ends while the editor is still open.
I have also added solutions that check process.pid and poll the process. Both end in an indefinite loop.
It seems such an overly complicated way to wait for the process to finish. So, is there a more robust way to do this?
#! /usr/bin/env python3
import subprocess
from gi.repository import Gio
import os
from time import sleep
import sys

def open_launcher(my_file):
    print('launcher open')
    app = subprocess.check_output(['xdg-mime', 'query', 'default', 'text/plain']).decode('utf-8').strip()
    print(app)
    launcher = Gio.DesktopAppInfo.new(app).get_commandline().split()[0]
    print(launcher)
    subprocess.call([launcher, my_file])
    print('launcher close')

def open_xdg(my_file):
    print('xdg open')
    subprocess.call(['xdg-open', my_file])
    print('xdg close')

def check_pid(pid):
    """ Check for the existence of a unix pid. """
    try:
        os.kill(int(pid), 0)
    except OSError:
        return False
    else:
        return True

def open_pid(my_file):
    pid = subprocess.Popen(['xdg-open', my_file]).pid
    while check_pid(pid):
        print(pid)
        sleep(1)

def open_poll(my_file):
    proc = subprocess.Popen(['xdg-open', my_file])
    while not proc.poll():
        print(proc.poll())
        sleep(1)

def open_ps(my_file):
    subprocess.call(['xdg-open', my_file])
    pid = subprocess.check_output("ps -o pid,cmd -e | grep %s | head -n 1 | awk '{print $1}'" % my_file, shell=True).decode('utf-8')
    while check_pid(pid):
        print(pid)
        sleep(1)

def open_popen(my_file):
    print('popen open')
    process = subprocess.Popen(['xdg-open', my_file])
    process.wait()
    print(process.returncode)
    print('popen close')

# This will end the open_xdg function while the editor is open.
# However, if the editor is already open, open_launcher will finish while the editor is still open.
#open_launcher('test.txt')

# This solution opens the file but the process terminates before the editor is closed.
#open_xdg('test.txt')

# This will loop indefinitely, printing the pid even after closing the editor.
# If you check for the pid in another terminal you see the pid with: [xdg-open] <defunct>.
#open_pid('test.txt')

# This will print None once, after which 0 is printed indefinitely: the subprocess ends immediately.
#open_poll('test.txt')

# This seems to work, even when the editor is already open.
# However, I had to use head -n 1 to prevent returning multiple pids.
#open_ps('test.txt')

# Like open_xdg, this opens the file but the process terminates before the editor is closed.
open_popen('test.txt')
Instead of trying to poll a PID, you can simply wait for the child process to terminate, using subprocess.Popen.wait():
Wait for child process to terminate. Set and return returncode attribute.
Additionally, getting the first part of get_commandline() is not guaranteed to be the launcher. The string returned by get_commandline() will match the Exec key spec, meaning the %u, %U, %f, and %F field codes in the returned string should be replaced with the correct values.
Here is some example code, based on your xdg-mime approach:
#!/usr/bin/env python3
import subprocess
import shlex
from gi.repository import Gio

my_file = 'test.txt'

# Get the default application
app = subprocess.check_output(['xdg-mime', 'query', 'default', 'text/plain']).decode('utf-8').strip()

# Get the command to run
command = Gio.DesktopAppInfo.new(app).get_commandline()

# Handle file paths with spaces by quoting the file path
my_file_quoted = "'" + my_file + "'"

# Replace field codes with the file path
# Also handle the special case of the atom editor
command = command.replace('%u', my_file_quoted)\
                 .replace('%U', my_file_quoted)\
                 .replace('%f', my_file_quoted)\
                 .replace('%F', my_file_quoted if app != 'atom.desktop' else '--wait ' + my_file_quoted)

# Run the default application, and wait for it to terminate
process = subprocess.Popen(
    shlex.split(command), stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
process.wait()

# Now the exit code of the text editor process is available as process.returncode
I have a few remarks on my sample code.
Remark 1: Handling spaces in file paths
It is important that the file path to be opened is wrapped in quotes; otherwise shlex.split(command) will split the filename on spaces.
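For example (with a hypothetical file name):
import shlex
shlex.split("gedit 'my file.txt'")  # ['gedit', 'my file.txt']
shlex.split("gedit my file.txt")    # ['gedit', 'my', 'file.txt']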
Remark 2: Escaped % characters
The Exec key spec states
Literal percentage characters must be escaped as %%.
My use of replace() then could potentially replace % characters that were escaped. For simplicity, I chose to ignore this edge case.
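If you did need to respect the escape, one possible sketch (my own, not part of the original code) is to substitute the field codes with a regex that skips a field letter following an escaped %%, then unescape:
import re

def expand(command, path):
    quoted = "'" + path + "'"
    # match %u/%U/%f/%F, but not when the % belongs to an escaped %%
    command = re.sub(r'(?<!%)%[uUfF]', lambda match: quoted, command)
    return command.replace('%%', '%')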
Remark 3: atom
I assumed the desired behaviour is to always wait until the graphical editor has closed. In the case of the atom text editor, it will terminate immediately on launching the window unless the --wait option is provided. For this reason, I conditionally add the --wait option if the default editor is atom.
Remark 4: subprocess.DEVNULL
subprocess.DEVNULL is new in python 3.3. For older python versions, the following can be used instead:
import os  # provides os.devnull

with open(os.devnull, 'w') as DEVNULL:
    process = subprocess.Popen(
        shlex.split(command), stdout=DEVNULL, stderr=DEVNULL)
Testing
I tested my example code above on Ubuntu with the GNOME desktop environment. I tested with the following graphical text editors: gedit, mousepad, and atom.

Python Subprocess not printing vnstat process

Thank you for taking the time to read this post.
I'm trying to do live bandwidth monitoring in Python using vnstat. Unfortunately, it is not printing the output that I want, and I can't seem to figure out why. This is my code below.
from subprocess import Popen, PIPE
import time

def run(command):
    process = Popen(command, stdout=PIPE, bufsize=1, shell=True, universal_newlines=True)
    while True:
        line = process.stdout.readline().rstrip()
        print(line)

if __name__ == "__main__":
    run("sudo vnstat -l -i wlan1")
When I run this code in the terminal, this is the output I get:
sudo python testingLog.py
Monitoring wlan1... (press CTRL-C to stop)
It does not show me the output I see when running vnstat -l -i wlan1 directly in the terminal.
Desired Output :
Monitoring wlan1... (press CTRL-C to stop)
rx: 0 kbit/s 0 p/s tx: 0 kbit/s 0 p/s
When I run vnstat -l -i wlan1 directly, it keeps updating the output live, so I suspect that my printing is wrong, as it does not print the desired output, but I can't seem to figure out why.
It's not that your printing is wrong; it's the fact that vnstat keeps updating the same line without issuing a new line, so process.stdout.readline() hangs at one point, waiting for a newline that never comes.
If you just want to redirect vnstat STDOUT to Python's STDOUT you can just pipe it, i.e.:
import subprocess
import sys
import time

proc = subprocess.Popen(["vnstat", "-l", "-i", "wlan1"], stdout=sys.stdout)
while proc.poll() is None:  # loop until the process ends (kill vnstat to see the effect)
    time.sleep(1)           # wait a second...
print("\nProcess finished.")
If you want to capture the output and deal with it yourself, however, you'll have to stream the STDOUT a character (or a safe buffer) at a time to capture whatever vnstat publishes, and then decide what to do with it. For example, to simulate the above piping but with you in the driver's seat, you can do something like:
import subprocess
import sys

proc = subprocess.Popen(["vnstat", "-l", "-i", "wlan1"], stdout=subprocess.PIPE)
while True:  # a STDOUT read loop
    output = proc.stdout.read(1)  # grab one character from vnstat's STDOUT
    if output == "" and proc.poll() is not None:  # process finished, exit the loop
        break
    sys.stdout.write(output)  # write the output to Python's own STDOUT
    sys.stdout.flush()        # flush it...
    # of course, you can collect the output instead of printing it to the screen...
print("\nProcess finished.")

Make python script processing large number of files faster

I have written a Python script which takes a directory as input and lists all files in that directory; it then decompresses each of these files and does some extra processing on it. The code is very straightforward: it uses the list of files from os.listdir(directory), and for each file in the list it decompresses it and then executes a bunch of different system calls on it. My question is: is there any way to make the loop executions parallel, or otherwise make the code run faster by leveraging the cores on the CPU? Below is some demo code to depict what I am aiming to optimize:
import os

files = os.listdir(directory)
for file in files:
    os.system("tar -xvf %s" % file)
    os.system("Some other sys call")
    os.system("One more sys call")
EDIT: The sys calls are the only way possible, since I am using certain custom-made CLI utilities that expect decompressed files as input, hence the decompression.
Note that os.system() is synchronous, i.e. Python waits for the task to complete before going to the next line.
Here is a simplification of what I do on Windows 7 and Python 2.6.6.
You should be able to easily modify this for your needs:
1. Create and run a process for each task I want to run in parallel.
2. After they are all started, wait for them to complete.
import win32api, win32con, win32process, win32event

def CreateMyProcess2(cmd):
    ''' create a process with no window that runs the given command line '''
    si = win32process.STARTUPINFO()
    info = win32process.CreateProcess(
        None,  # AppName
        cmd,   # Command line
        None,  # Process Security
        None,  # Thread Security
        0,     # inherit Handles?
        win32process.NORMAL_PRIORITY_CLASS,
        None,  # New environment
        None,  # Current directory
        si)    # startup info
    # info is the tuple (hProcess, hThread, processId, threadId)
    return info[0]

if __name__ == '__main__':
    handles = []
    cmd = 'cmd /c "dir/w"'
    handle = CreateMyProcess2(cmd)
    handles.append(handle)
    cmd = 'cmd /c "path"'
    handle = CreateMyProcess2(cmd)
    handles.append(handle)
    rc = win32event.WaitForMultipleObjects(
        handles,  # sequence of objects (here, handles) to wait for
        1,        # wait for them all (use 0 to wait for just one)
        15000)    # timeout in milliseconds
    print rc
    # rc = 0 if all tasks have completed before the timeout
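A portable sketch of the same idea without pywin32 (my own, assuming the directory variable from the question) replaces os.system() with worker threads that each run the chain of calls for one file, with concurrency capped by a pool:
import os
import subprocess
from multiprocessing.pool import ThreadPool

def process_file(path):
    # each worker runs the same chain of calls the original loop did
    subprocess.call(['tar', '-xvf', path])
    # ...the other system calls for this file would go here

files = os.listdir(directory)  # `directory` as in the question
pool = ThreadPool(4)           # cap at 4 concurrent workers
pool.map(process_file, files)  # blocks until every file is processed
pool.close()
pool.join()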

multiprocessing.Process subprocess.Popen completed?

I have a server that launches command line apps. They receive a local file path, load a file, export something, then close.
It's working, but I would like to be able to keep track of which tasks are active and which completed.
So with this line:
p = mp.Process(target=subprocess.Popen(mayapy + ' -u ' + job.pyFile), group=None)
I have tried 'is_alive', and it always returns False.
The subprocess closes, I see it closed in task manager, but the process and pid still seem queryable.
Your use of mp.Process is wrong. The target should be a function, not the return value of subprocess.Popen(...).
In any case, if you define:
proc = subprocess.Popen(mayapy + ' -u ' + job.pyFile)
Then proc.poll() will be None while the process is working, and will equal a return value (not None) when the process has terminated.
For example (the output is in the comments):
import subprocess
import shlex
import time
PIPE = subprocess.PIPE
proc = subprocess.Popen(shlex.split('ls -lR /'), stdout=PIPE)
time.sleep(1)
print(proc.poll())
# None
proc.terminate()
time.sleep(1)
print(proc.poll())
# -15
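If you do want the mp.Process wrapper, the fix for the original line is to pass a function as target; a sketch (mayapy and job.pyFile are the names from the question, not defined here):
import multiprocessing as mp
import subprocess

def run_job(command):
    # the Popen/wait happens inside the worker process
    subprocess.call(command)

p = mp.Process(target=run_job, args=(mayapy + ' -u ' + job.pyFile,))
p.start()
# p.is_alive() now reports whether the task is still running
# p.join() waits for it to finish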
