Python print buffering

Let me rephrase my previous question.
I just created a tool in ArcGIS using Python as the script language. The tool executes (runs) an outside program using subprocess.Popen. When I run the tool from ArcGIS, a window appears that shows only the following:
Executing: RunFLOW C:\FLOW C:\FLOW\FLW.bat
Start Time: Mon Nov 30 16:50:37 2009
Running script RunFLOW...
Completed script RuFLOW...
Executed (RunFLOW) successfully.
End Time: Mon Nov 30 16:50:48 2009 (Elapsed Time: 11.00 seconds)
The script is as follows:
# Import system modules
import sys, string, os, arcgisscripting, subprocess
# Create the Geoprocessor object
gp = arcgisscripting.create()
# Read the parameter values:
# 1: input workspace
prj_fld = gp.GetParameterAsText(0)
Flow_bat = gp.GetParameterAsText(1)
os.chdir(prj_fld)
p=subprocess.Popen(Flow_bat,shell=True,stdout=subprocess.PIPE)
stdout_value = p.communicate()[0]
print '\tstdout:', repr(stdout_value)
When I run the same program from a command window, it prints a screen full of information (date, number of iterations, etc.). I want to see all this information in the window that appears after I run the model from ArcGIS, in addition to what is being printed right now.
I tried print, communicate and flush, but wasn't able to make it work. Any suggestions?
When I run the script as it is right now, it runs the executable, but it gives an error as follows:
ERROR 999998: There are no more files.
Thanks

I know nothing about ArcGIS, so I may be shooting in the dark here, but...if you want stdout, you usually don't want the communicate() method. You want something like this:
p=subprocess.Popen(Flow_bat,shell=True,stdout=subprocess.PIPE)
stdout_value = p.stdout.read()
The communicate() method is used for interacting with a process. From the documentation:
Interact with process: Send data to stdin. Read data from stdout
and stderr, until end-of-file is reached. Wait for process to
terminate.
I'm guessing that when ArcGIS runs your script, stdin is not connected, which causes the script to exit for some reason.
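To make the suggestion concrete, here is a minimal, self-contained sketch of reading stdout directly instead of using communicate(). The child process here is a throwaway stand-in for the OP's FLW.bat, not the real command:

```python
import subprocess
import sys

# Spawn a child that writes several lines to stdout
# (a stand-in for the OP's FLW.bat; any console program works).
child_code = "for i in range(3): print('iteration %d' % i)"
p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdout=subprocess.PIPE)

# Read everything the child wrote to stdout, then wait for it to exit.
stdout_value = p.stdout.read()
p.wait()

print(stdout_value.decode())
```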

I'm not sure if you are aware of this or not but I use:
gp.AddMessage('blah blah')
to get messages to appear in the ArcGIS processing window. It's a method of the geoprocessor object, the library you import in Python to access the ArcGIS engine. You would already have imported this library if you are doing any geoprocessing.
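Combining the two answers: read the child's stdout line by line and forward each line so it shows up in the ArcGIS progress window. This is a hedged sketch written for current Python 3; inside ArcGIS you would pass the geoprocessor's gp.AddMessage as the emit callback, while the print fallback here is mine, purely so the sketch runs outside ArcGIS:

```python
import subprocess
import sys

def run_and_report(cmd, emit=print):
    """Run cmd, forwarding each stdout line to emit as it arrives.

    Inside ArcGIS, emit would be gp.AddMessage (the geoprocessor
    object from the question); print is just a stand-in for testing.
    """
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT, text=True)
    lines = []
    for line in p.stdout:
        line = line.rstrip("\n")
        lines.append(line)
        emit(line)
    p.wait()
    return p.returncode, lines

# Stand-in for FLW.bat: a child that prints two lines.
rc, lines = run_and_report(
    [sys.executable, "-c", "print('step 1'); print('step 2')"])
```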

Related

How to end a python subprocess with no return?

I'm working on a BCP wrapper method in Python, but have run into an issue invoking the command with subprocess.
As far as I can tell, the BCP command doesn't return any value or indication that it has completed outside of what it prints to the terminal window, which causes subprocess.call or subprocess.run to hang while they wait for a return.
subprocess.Popen allows a manual .terminate() method, but I'm having issues getting the table to write afterwards.
The bcp command works from the command line with no issues: it loads data from a source csv according to a .fmt file and writes an error log file. My script is able to dismount the file from the log path, so I would consider the command itself irrelevant; the question is about the behavior of the subprocess module.
This is what I'm trying at the moment:
process = subprocess.Popen(bcp_command)
try:
    path = Path(log_path)
    sleep_counter = 0
    while path.is_file() == False and sleep_counter < 16:
        sleep(1)
        sleep_counter += 1
finally:
    process.terminate()
self.datacommand = datacommand
My idea was to check that the error log file had been written by the bcp command, as a way to tell that the process had finished. With this, my script no longer freezes, and the files are apparently being successfully written and dismounted later on in the script. The script also terminates in less than the 15 seconds the sleep loop would take to end it.
When the process froze my Spyder shell (and IDLE too, so it's not the IDE), I could force-terminate it by closing the console itself, and it would at least write to the server.
However, it seems that by using .terminate() the command isn't actually writing anything to the server.
I checked whether a dumb 15-second time-out (it takes about 2 seconds to do the BCP with this data) would work as well, in case it was writing an error log before the load finished.
It still resulted in an empty table on the SQL server.
How can I get subprocess to execute a command without hanging?
Well, it seems to be a more general issue about calling helper functions with Popen
as seen here:
https://github.com/dropbox/pyannotate/issues/67
I was able to fix the hanging issue by changing it to:
subprocess.Popen(bcp_command, close_fds=True)
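For reference, a minimal sketch of the fix in context: spawn the child with close_fds=True and then wait on it explicitly with a timeout, so a hung child can't freeze the caller forever. The child command here is a placeholder stand-in, not the real bcp invocation:

```python
import subprocess
import sys

# Placeholder child; in the real case this would be the bcp command line.
cmd = [sys.executable, "-c", "print('load finished')"]

proc = subprocess.Popen(cmd, close_fds=True, stdout=subprocess.PIPE)

# Wait with a timeout so a hung child can't block the caller forever.
try:
    out, _ = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.terminate()
    out, _ = proc.communicate()
```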

How to execute another python file and then close the existing one?

I am working on a program that needs to call another Python script and terminate the execution of the current file. I tried doing so using the os.close() function, as follows:
def call_otherfile(self):
    os.system("python file2.py")  # Execute new script
    os.close()  # Close current script
Using the above code I am able to open the second file, but I am unable to close the current one. I know it's a silly mistake, but I'm unable to figure out what it is.
To do this you will need to spawn a subprocess directly. This can either be done with a more low-level fork and exec model, as is traditional in Unix, or with a higher-level API like subprocess.
import subprocess
import sys

def spawn_program_and_die(program, exit_code=0):
    """
    Start an external program and exit the script
    with the specified return code.

    Takes the parameter program, which is a list
    that corresponds to the argv of your command.
    """
    # Start the external program
    subprocess.Popen(program)
    # We have started the program, and can suspend this interpreter
    sys.exit(exit_code)

spawn_program_and_die(['python', 'path/to/my/script.py'])
# Or, as in OP's example
spawn_program_and_die(['python', 'file2.py'])
Also, just a note on your original code: os.close corresponds to the Unix syscall close, which tells the kernel that you no longer need a file descriptor. It is not supposed to be used to exit a program.
If you don't want to define your own function, you could always just call subprocess.Popen directly, like Popen(['python', 'file2.py']).
Use the subprocess module, which is the suggested way to do that kind of thing (executing a new script or process); in particular, look at Popen for starting a new process. To terminate the current program, you can use sys.exit().
It's very simple: use os.startfile and after that use exit() or sys.exit(). It will work.
# file 1
os.startfile("file2.py")
exit()

Can't kill a running subprocess using Python on Windows

I have a Python script that runs all day long, checking the time every 60 seconds so it can start/end tasks (other Python scripts) at specific periods of the day.
This script is running almost entirely OK. Tasks are starting at the right time and being opened in a new cmd window, so the main script can keep running and sampling the time. The only problem is that it just won't kill the tasks.
import os
import time
import signal
import subprocess
import ctypes

freq = 60  # sampling frequency in seconds

while True:
    print 'Sampling time...'
    now = int(time.time())

    # Initialize the task... let's say 8:30 am
    if time.strftime("%H:%M", time.localtime(now)) == '08:30':
        # The following method is used so python opens another cmd window
        # and keeps the original script running and sampling time
        pro = subprocess.Popen(["start", "cmd", "/k", "python python-task.py"], shell=True)

    # Kill process attempts... let's say 11:40 am
    if time.strftime("%H:%M", time.localtime(now)) == '11:40':
        pro.kill()  # not working - nothing happens
        pro.terminate()  # not working - nothing happens
        os.kill(pro.pid, signal.SIGINT)  # not working - windows error 5 access denied
        # Kill the process using ctypes - not working - nothing happens
        ctypes.windll.kernel32.TerminateProcess(int(pro._handle), -1)
        # Kill process using windows taskkill - nothing happens
        os.popen('TASKKILL /PID ' + str(pro.pid) + ' /F')

    time.sleep(freq)
Important Note: the task script python-task.py will run indefinitely. That's exactly why I need to be able to "force" kill it at a certain time while it still running.
Any clues? What am I doing wrong? How to kill it?
You're killing the shell that spawns your sub-process, not your sub-process.
Edit: From the documentation:
The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
Warning
Passing shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
So, instead of passing a single string, pass each argument separately in the list, and eschew using the shell. You probably want to use the same executable for the child as for the parent, so it's usually something like:
pro = subprocess.Popen([sys.executable, "python-task.py"])
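A quick sketch of why this fixes the kill problem: with a list of arguments and no shell, pro.pid is the child process itself, so terminate() acts on it directly. The sleeping child below is a stand-in for python-task.py:

```python
import subprocess
import sys

# Stand-in for python-task.py: a child that would run indefinitely.
pro = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(600)"])

assert pro.poll() is None   # still running

pro.terminate()             # kills the child directly, no shell in between
pro.wait()
```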

Needing Help Capturing STDOUT in real-time from pygtk GUI in python project

I am working on a project in which I want to use the command-line utility cdparanoia from a Python pygtk GUI. I'm using Glade for UI development. I have tried importing subprocess and using subprocess.Popen. It works, but it freezes my GUI (it won't even allow repainting of the windows) while the process is executing. Not a very nice experience for the user. How can I prevent this behaviour? I would like to put a cancel button on the window, but that can't work while the program is "frozen".
Ultimately, I would like to capture stderr (as below; the audio data is piped to sox via stdout) and present it in a gtk.Expander, with a look similar to Synaptic installing a program, so the user can see things happening in real time. I would also like to use the text from the progress indicator (as seen below) to build a real progress-indicator widget. How can I get the shell to pass info back to Python in real time, rather than once the process is finished (when it arrives as one big info dump)?
Real-time info needing captured:
Working on me - me - DISK 01.flac
cdparanoia III release 10.2 (September 11, 2008)
Ripping from sector 0 (track 1 [0:00.00])
to sector 325195 (track 15 [1:56.70])
outputting to stdout
(== PROGRESS == [> | 004727 00 ] == :-) O ==)
Here is the code I've used so far:
quick = " -Z" if self.quick == True else ""
command = "cdparanoia -w%s 1- -| sox -t wav - \"%s - %s - DISK %s%s.flac\"" % \
    (
        quick,
        self.book_name.replace(" ", "_"),
        self.author_name.replace(" ", "_"),
        "0" if disc < 10 else "",
        disc
    )
print command
shell = subprocess.Popen(command, shell=True, executable="/bin/bash",
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
data, err = shell.communicate(command)
With Thanks,
Narnie
I wrote a Python shell implementation once, and it did run wget and the actual Python console with fully functional output.
You need to use subprocess.Popen and write directly to sys.stdout:
import shlex
import subprocess
import sys

process = subprocess.Popen(shlex.split(command),
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)
while True:
    output = process.stdout.read(1)
    if output == '' and process.poll() is not None:
        break
    if output != '':
        sys.stdout.write(output)
        sys.stdout.flush()
If you write a GUI program which reads from a file handle, you have to use a dispatcher to integrate the file-descriptor events into the GUI event loop. A general description of event loops can be found at Wikipedia; the specific description for Gtk+ can be found in the reference.
Solution for your problem: use the function g_io_add_watch to integrate your action into the main event loop. Here is an example in C; Python should be analogous.
Yes, there are two issues here. The first is that you'll need to specify a read timeout, so that you don't block until the subprocess is finished. The second is that there is probably buffering happening that is not desirable.
To address the first issue and read from the subprocess asynchronously, you could try my subProcess module, with a timeout:
http://www.pixelbeat.org/libs/subProcess.py
Note this is simple but also old, and Linux only. It was used as a basis for the new Python subprocess module, so you'd be better off going with that if you need portability.
To understand/control any additional buffering which may happen, see:
http://www.pixelbeat.org/programming/stdio_buffering/
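To see the buffering issue concretely: a child Python process block-buffers stdout when it writes to a pipe, so forcing unbuffered output (python -u here; for C programs, the stdio tricks on that page) lets the parent read lines as they are produced. A small sketch with a throwaway child:

```python
import subprocess
import sys

# The child prints two lines; -u forces unbuffered stdout, so each line
# reaches the pipe immediately instead of sitting in a block buffer.
p = subprocess.Popen(
    [sys.executable, "-u", "-c", "print('line 1'); print('line 2')"],
    stdout=subprocess.PIPE, text=True)

received = [line.rstrip("\n") for line in p.stdout]
p.wait()
```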

Python and subprocess input piping

I have a small script that launches a java program (a game server manager) and, every half hour, feeds it a command as if the user had typed it. However, after reading documentation and experimenting, I can't figure out how to get two things:
1) A version which allows the user to type commands into the terminal window, and they will be sent to the server manager's input just as the "save-all" command is.
2) A version which remains running but sends any new input to the system itself, removing the need for a second terminal window. This one is actually half-happening right now: when something is typed there is no visual feedback, but once the program has ended it's clear the terminal received the input. For example, a directory listing will be there if "dir" was typed while the program was running. This one is more for understanding than practicality.
Thanks for the help. Here's the script:
from time import sleep
import sys, os
import subprocess

# Launches the server with specified parameters, waits however
# long is specified in saveInterval, then saves the map.
# Edit the value after "saveInterval =" to desired number of minutes.
# Default is 30
saveInterval = 30

# Start the server. Substitute the launch command with whatever you please.
p = subprocess.Popen('java -Xmx1024M -Xms1024M -jar minecraft_server.jar',
                     shell=False,
                     stdin=subprocess.PIPE)

while True:
    sleep(saveInterval * 60)
    # Comment out these two lines if you want the save to happen silently.
    p.stdin.write("say Backing up map...\n")
    p.stdin.flush()
    # Stop all other saves to prevent corruption.
    p.stdin.write("save-off\n")
    p.stdin.flush()
    sleep(1)
    # Perform save
    p.stdin.write("save-all\n")
    p.stdin.flush()
    sleep(10)
    # Allow other saves again.
    p.stdin.write("save-on\n")
    p.stdin.flush()
Replace your sleep() with a call to select.select((sys.stdin,), (), (), saveInterval * 60). That gives you the same timeout, but also listens on stdin for user commands. When select says you have input, read a line from sys.stdin and feed it to your process; when select indicates a timeout, perform the "save" commands you're doing now.
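A sketch of that select() loop (POSIX; on Windows, select only works on sockets). So the example is self-contained, a pipe stands in for sys.stdin, and the 1-second timeout stands in for saveInterval * 60:

```python
import os
import select

# In the real script the first select argument would be [sys.stdin]
# and the timeout saveInterval * 60; a pipe stands in for stdin here.
r_fd, w_fd = os.pipe()
os.write(w_fd, b"say hello\n")   # simulate the user typing a command

readable, _, _ = select.select([r_fd], [], [], 1.0)
if readable:
    command = os.read(r_fd, 1024)   # user typed something: forward it
else:
    command = None                  # timeout: run the periodic save
```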
It won't completely solve your problem, but you might find python's cmd module useful. It's a way of easily implementing an extensible command line loop (often called a REPL).
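A minimal cmd.Cmd sketch of such a REPL. The command names and the log list are made up for illustration; in the real script, do_save would write to the server process's stdin:

```python
import cmd

class ServerShell(cmd.Cmd):
    """Tiny REPL; each do_* method becomes a command."""
    prompt = "server> "

    def __init__(self):
        super().__init__()
        self.log = []

    def do_save(self, arg):
        """Pretend to forward a save-all to the server process."""
        self.log.append("save-all")

    def do_quit(self, arg):
        """Exit the command loop."""
        return True

shell = ServerShell()
shell.onecmd("save")   # dispatches to do_save
```

In interactive use you would call shell.cmdloop() instead of onecmd(), which reads commands from stdin until do_quit returns True.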
You can run the program using screen, then you can send the input to the specific screen session instead of to the program directly (if you are in Windows just install cygwin).
