Capturing stdout from subprocess after sending SIGINT - python

I have a dtrace snippet run via a Python script. The dtrace snippet is such that it generates its data when CTRL-C is issued to it, so I defined a signal_handler in the Python script to catch CTRL-C from the user and relay it to the dtrace invocation done via subprocess.Popen, but I am unable to get any output in my log file. Here is the script:
import signal
import subprocess
import time

# dtrace_script is defined elsewhere in the script

Proc = []
signal_posted = False

def signal_handler(sig, frame):
    print("Got CTRL-C!")
    global signal_posted
    signal_posted = True
    global Proc
    Proc.send_signal(signal.SIGINT)  # Signal posting from handler

def execute_hotkernel():
    #
    # Generate the .out output file
    #
    fileout = "hotkernel.out"
    fileo = open(fileout, "w+")
    global Proc
    Proc = subprocess.Popen(['/usr/sbin/dtrace', '-n', dtrace_script], stdout=fileo)
    while Proc.poll() is None:
        time.sleep(0.5)

def main():
    signal.signal(signal.SIGINT, signal_handler)  # Change our signal handler
    execute_hotkernel()

if __name__ == '__main__':
    main()
Since I pass a hotkernel.out file object to subprocess.Popen for stdout, I was expecting dtrace's output to be redirected to hotkernel.out on CTRL-C, but the file is empty. What is missing here?
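For what it's worth, the basic relay pattern does work on POSIX when the parent waits for the child after sending SIGINT, so the child gets a chance to write its summary before the parent moves on. Here is a minimal, self-contained sketch; the toy Python child standing in for dtrace is purely illustrative:

```python
import signal
import subprocess
import sys
import time

# Toy child standing in for dtrace: it prints a summary when it receives SIGINT.
CHILD = (
    "import signal, sys, time\n"
    "def handler(sig, frame):\n"
    "    print('summary', flush=True)\n"
    "    sys.exit(0)\n"
    "signal.signal(signal.SIGINT, handler)\n"
    "time.sleep(60)\n"
)

with open("hotkernel.out", "w") as fileo:
    proc = subprocess.Popen([sys.executable, "-c", CHILD], stdout=fileo)
    time.sleep(1)                    # give the child time to install its handler
    proc.send_signal(signal.SIGINT)  # relay CTRL-C to the child
    proc.wait()                      # let the child flush its summary before we exit
```

After proc.wait() returns, hotkernel.out contains the child's summary; if the parent exits (or closes the file) before the child has finished handling the signal, the file can end up empty.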

I have a similar issue.
In my case, it's a shell script that runs until you hit Control-C and then prints out summary information. When I run it using subprocess.Popen, whether with a PIPE or a file object for stdout, I either don't get the information (with a file object) or it hangs when I try to run stdout.readline().
I finally tried running the subprocess from the interpreter and discovered that, with a PIPE, I could get the last line of output after the SIGINT if I call stdout.readline() (where it hangs), hit Control-C in the interpreter, and then call stdout.readline() again.
I do not know how to emulate this in a script, for either a file output or a PIPE. I did not try the file output in the interpreter.
EDIT:
I finally got back to this and determined it's actually pretty easy to reproduce outside of Python, and it really has nothing to do with Python.
/some_cmd_that_ends_on_sigint
(enter control-c)
*data from stdout in event handler*
Works.
/some_cmd_that_ends_on_sigint | tee some.log
(enter control-c)
*nothing from the event handler prints to the screen or the log*
Where's my log?
I ended up just adding a file stream in the event handler (in the some_cmd_that_ends_on_sigint source) that writes the data to a secondary log. It works, if a bit awkwardly: you still get the data on screen when running without any piping, and I can also read it from the secondary log when piped or run from Python.
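That workaround can be sketched like this (the names and summary content are illustrative, not from the original command): a SIGINT handler inside the monitored program that writes its summary to a secondary log as well as stdout:

```python
import signal
import sys

SUMMARY_LOG = "summary.log"  # hypothetical secondary log path

def on_sigint(sig, frame):
    summary = "lines processed: 42"  # placeholder for the real summary data
    print(summary)                   # still visible when not piped
    with open(SUMMARY_LOG, "a") as log:
        log.write(summary + "\n")    # survives even when stdout goes to a pipe
    sys.exit(0)

signal.signal(signal.SIGINT, on_sigint)
```

The handler writes to the log file directly, so the summary survives regardless of what happens to the stdout pipe when the reader goes away.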


How to run an EXE program in the background and get the output in Python

I want to run an EXE program in the background.
Let's say the program is httpd.exe.
I can run it, but when I try to get the output it gets stuck, because there is no output if it starts successfully. If there is an error, it's OK.
Here is the code I'm using:
import asyncio
import os

async def run(cmd):
    proc = await asyncio.create_subprocess_exec(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await proc.communicate()
    return (proc, stdout, stderr)

os.chdir('c:\\apache\\bin')
process, stdout, stderr = asyncio.run(run('httpd.exe'))
print(stdout, stderr)
I tried to make the following code as general as possible:
I make no assumptions as to whether the program being run writes its output to stdout alone or stderr alone. So I capture both outputs by starting two threads, one for each stream, which write the output to a common queue that can be read in real time. When end-of-stream is encountered on each of stdout and stderr, the corresponding thread writes a special None record to the queue to indicate end of stream. So the reader of the queue knows that after seeing two such "end of stream" indicators there will be no more lines written to the queue and that the process has effectively ended.
The call to subprocess.Popen can be made with argument shell=True so that built-in shell commands can also be run, and to make the specification of the command easier (it can now be a single string rather than a list of strings).
The function run_cmd returns the created process and the queue. You just have to loop reading lines from the queue until two None records are seen. Once that occurs, you can then just wait for the process to complete, which should be immediate.
If you know that the process you are starting only writes its output to stdout or stderr (or if you only want to catch one of these outputs), then you can modify the program to start only one thread, specify the subprocess.PIPE value for only that output, and have the loop that reads lines from the queue look for just one None end-of-stream indicator.
The threads are daemon threads, so if you wish to terminate based on output from the process before all the end-of-stream records have been detected, the threads will automatically be terminated along with the main process.
run_apache, which runs Apache as a subprocess, is itself run in a daemon thread. If it detects any output from Apache, it sets an event that was passed to it. The main thread that starts run_apache can periodically test this event, wait on this event, wait for the run_apache thread to end (which will only occur when Apache ends), or terminate Apache via the global variable proc.
import subprocess
import sys
import threading
import queue

def read_stream(f, q):
    for line in iter(f.readline, ''):
        q.put(line)
    q.put(None)  # show no more data from stdout or stderr

def run_cmd(command, run_in_shell=True):
    """
    Run command as a subprocess. If run_in_shell is True, then
    command is a string, else it is a list of strings.
    """
    proc = subprocess.Popen(command, shell=run_in_shell, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    q = queue.Queue()
    threading.Thread(target=read_stream, args=(proc.stdout, q), daemon=True).start()
    threading.Thread(target=read_stream, args=(proc.stderr, q), daemon=True).start()
    return proc, q
import os

def run_apache(event):
    global proc
    os.chdir('c:\\apache\\bin')
    proc, q = run_cmd(['httpd.exe'], False)
    seen_None_count = 0
    while seen_None_count < 2:
        line = q.get()
        if line is None:
            # end of stream from either stdout or stderr
            seen_None_count += 1
        else:
            event.set()  # seen an output line
            print(line, end='')
    # wait for process to terminate, which should be immediate:
    proc.wait()

# This event will be set if Apache writes any output:
event = threading.Event()
t = threading.Thread(target=run_apache, args=(event,), daemon=True)
t.start()
# Main thread runs and can test the event at any time to see if there has been any output:
if event.is_set():
    ...
# The main thread can wait for the run_apache thread to terminate normally,
# which will occur when Apache terminates:
t.join()
# or the main thread can kill Apache via the global variable proc:
proc.terminate()  # No need for t.join() since run_apache is a daemon thread

Python subprocess hangs on interaction

I am trying to write a Minecraft server wrapper that allows me to send it commands and receive output. Eventually, I'll attach a socket interface so that I can control my home server remotely to restart it, send commands, etc.
To this end, I am attempting to use the Python subprocess module to start the server, then send commands and receive the output of the server. Right now, I am running into an issue: I can grab the output of the server and reflect it to the screen, but the very first command I send to the process freezes the whole thing and I have to kill it. It should be noted that I have attempted to remove the process.communicate line and replace it with a print(command); this also froze the process. My very basic current code is as follows:
from subprocess import Popen, PIPE
from threading import Thread

def listen(process):
    while process.poll() is None:
        output = process.stdout.readline()
        print(str(output))

def talk(process):
    command = input("Enter command: ")
    while command != "exit_wrapper":
        # freezes on first sent command
        parse_command(process, command)
        command = input("Enter command: ")
    print("EXITING! KILLING SERVER!")
    process.kill()

def parse_command(process, command):
    process.communicate(command.encode())

def main():
    process = Popen("C:\\Minecraft Servers\\ServerStart.bat",
                    cwd="C:\\Minecraft Servers\\", stdout=PIPE, stdin=PIPE)
    listener = Thread(None, listen, None, kwargs={'process': process})
    listener.start()
    talker = Thread(None, talk, None, kwargs={'process': process})
    talker.start()
    listener.join()
    talker.join()

if __name__ == "__main__":
    main()
Any help offered would be greatly appreciated!
subprocess.Popen.communicate() documentation clearly states:
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
And in your case it's doing exactly that. Instead of waiting for the process to terminate, you want to interact with it: much like you're reading from the STDOUT stream directly in your listen() function, you should write to the process's STDIN in order to send it commands. Something like:
def talk(process):
    command = input("Enter command: ")
    while command != "exit_wrapper" and process.poll() is None:
        # write the command (newline-terminated, as bytes) to the process STDIN
        process.stdin.write((command + "\n").encode())
        process.stdin.flush()  # flush it
        command = input("Enter command: ")  # get the next command from user
    if process.poll() is None:
        print("EXITING! KILLING SERVER!")
        process.kill()
The problem with this approach, however, is that your Enter command: prompt can overwrite the server's output, so the user may end up typing the command over the server's output instead of in the 'prompt' you've designated.
What you might want to do instead is parse your server's output in the listen() function, determine from the collected output when the wrapped server expects user input, and then, and only then, call the talk() function (with the while loop removed) to obtain user input.
You should also pipe out STDERR as well, in case the Minecraft server is trying to tell you something over it.
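One way to detect "the server is waiting for input" is to read raw bytes instead of lines, since a prompt usually has no trailing newline and readline() would block on it. A minimal sketch; the prompt string is an assumption you would adapt to your server:

```python
import os

PROMPT = "> "  # hypothetical prompt the wrapped program prints when it wants input

def read_until_prompt(fd, prompt=PROMPT):
    """Accumulate raw output from fd until it ends with the prompt string
    (or the stream closes); works where readline() would block on the prompt."""
    buf = ""
    while not buf.endswith(prompt):
        chunk = os.read(fd, 1024)  # returns whatever bytes are available
        if not chunk:              # end of stream
            break
        buf += chunk.decode()
    return buf
```

With a Popen object you would pass process.stdout.fileno() as fd, then hand control to talk() once the prompt has been seen.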

How can I know a subprocess is waiting for input in a GUI

I am writing a top script (Python) to control the EDA tools in an IC design flow. The top script has a GUI (Python/tkinter), runs the EDA tools with subprocess.Popen(), and prints their stdout in the GUI.
Sometimes the EDA tool does not exit but waits for input; the GUI then hangs waiting for my input, but the top script cannot catch the right stdout message from subprocess.stdout and cannot pass my input to the subprocess.
Below is part of my top script.
class runBlocks():
    ... ...
    def subprocessRun(self):
        # run EDA tool "runCommand" with subprocess.Popen.
        self.RC = subprocess.Popen(runCommand, shell=True, stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        # save subprocess stdout into a queue.
        self.queue = queue.Queue()
        getRunProcessThread = threading.Thread(target=self.getRunProcess)
        getRunProcessThread.setDaemon(True)
        getRunProcessThread.start()
        # update GUI with the stdout (from the queue).
        self.updateRunProcess()
        # wait for subprocess to finish.
        (stdout, stderr) = self.RC.communicate()
        return(self.RC.returncode)

    def getRunProcess(self):
        # non-blocking capture of subprocess.stdout
        while self.RC.poll() is None:
            stdoutLine = str(self.RC.stdout.readline(), encoding='utf-8')
            self.queue.put(stdoutLine)

    def updateRunProcess(self):
        # if there is still something from subprocess.stdout, print it into the GUI Text.
        while self.RC.poll() is None or not self.queue.empty():
            if self.queue.empty():
                time.sleep(0.1)
                continue
            else:
                line = self.queue.get_nowait()
                if line:
                    # "self.runProcessText" is a Text widget on the GUI showing the EDA tool output.
                    self.runProcessText.insert(END, line)
                    # "self.runProcessText" is on "self.frame6"; after inserting a new message, update the frame.
                    self.frame6.update()
                    self.runProcessText.see(END)
If I run the EDA tool in a terminal directly, it stops and waits for my input.
$ dc_shell-t -64 -topo -f ./analyze.run.tcl -o analyze.log
...
$ #quit
$ dc_shell-topo>
If I run the EDA tool with my top script, subprocess.stdout stops at the message "#quit"; I cannot get the message "dc_shell-topo>".
...
#quit
I know the subprocess is waiting for my input, but the GUI stops at the message "#quit" and hangs on the "time.sleep(0.1)" in the while loop.
I also tried replacing "time.sleep(0.1)" with "self.GUItop.after(100, self.updateRunProcess)"; then stdout goes past the "dc_shell-topo>" prompt without any input and finishes directly.
...
dc_shell-topo>
Memory usage for main task 82 Mbytes.
CPU usage for this session 6 seconds ...
Thank you...
My expected behavior is:
When the command stops at "dc_shell-topo>" in the subprocess, I can get that message from subprocess.stdout.
The GUI does not hang while waiting for my input.
My questions are:
Why does the choice between "time.sleep()" and "self.GUItop.after()" affect the subprocess.stdout messages?
When the EDA tool is waiting for input with the message "dc_shell-topo>", how can I get that message from subprocess.stdout?
When using self.GUItop.after, the GUI Text updates before the subprocess finishes (it is waiting for free CPU), but without self.GUItop.after the GUI hangs on the time.sleep() call; how can I solve this?
I think this is really a headache of a problem; I have read thousands of related questions on Google, but none of them answer my question.

Write a log file on SIGTERM in a subprocess terminated with p.terminate()

I terminate my subprocess with p.terminate(), which I opened like this:
p = Popen([sys.executable, args], stdin=PIPE, stdout=PIPE, stderr=open("error.txt", 'a'))
As you can see, I redirect all errors to a text file so I can easily read them.
I misused this feature to print something into this file when the subprocess gets terminated:
signal.signal(signal.SIGTERM, sigterm_handler)  # sets shutdown_flag
while True:
    if shutdown_flag:
        print("Some Useful information", file=sys.stderr)
However, the file is always empty. Is this because the pipe gets closed when terminate is called, and whatever the subprocess writes at that point is lost? Or is there some other issue I just don't see here?
In general, it is not a good idea to terminate a worker abruptly. Rather, you should ask it to end using a threading.Event.
event = threading.Event()
while not event.is_set():  # use this instead of while True:
    # do your stuff
    ...
# now write your file
print("Some Useful information", file=sys.stderr)
Then, instead of p.terminate(), use:
event.set()
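As a runnable illustration of that cooperative-shutdown pattern (a thread rather than a subprocess here; the toy work loop is illustrative):

```python
import sys
import threading
import time

event = threading.Event()

def worker():
    while not event.is_set():
        time.sleep(0.01)  # placeholder for the real work
    # cooperative shutdown: the worker reaches this point intact,
    # so final output like this is never lost
    print("Some Useful information", file=sys.stderr)

t = threading.Thread(target=worker)
t.start()
event.set()   # ask the worker to finish instead of killing it
t.join()      # returns once the worker has written its last message
```

For a separate process rather than a thread, the same idea applies, but you would signal it with something like a multiprocessing.Event or by closing its stdin, since a threading.Event is not shared across processes.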

Get realtime output from python subprocess

I'm trying to invoke a command-line utility from Python. The code is as follows:
import subprocess
import sys

class Executor:
    def executeEXE(self, executable):
        CREATE_NO_WINDOW = 0x08000000
        process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                                   creationflags=CREATE_NO_WINDOW)
        while True:
            line = process.stdout.readline()
            if line == '' and process.poll() is not None:
                break
            print line
The problem with the above code is that I want the real-time output of the process, which I'm not getting. What am I doing wrong here?
There are 2 problems in your code:
First of all, readline() will block until a new line is printed and flushed.
That means you should execute the code
while True:
    ...
in a new thread and call a callback function when the output is ready.
Second, since readline is waiting for a new line, your executable must flush after every print:
print 'Hello World'
sys.stdout.flush()
You can see some code and an example on my git:
pyCommunicator
Instead, if your external tool is buffered, the only thing you can try is to use stderr as PIPE:
https://stackoverflow.com/a/11902799/2054758
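Putting those two fixes together, here is a small sketch of the thread-plus-callback approach (the function name and signature are my own, not from pyCommunicator):

```python
import subprocess
import threading

def stream_lines(cmd, callback):
    """Run cmd and invoke callback for each output line as it appears.
    Assumes the child flushes its stdout (or is line-buffered)."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True, bufsize=1)

    def pump():
        for line in proc.stdout:        # yields lines as the child flushes them
            callback(line.rstrip("\n"))
        proc.stdout.close()

    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return proc, t
```

The main thread stays free while lines arrive via the callback; running the child with explicit flushes (or python -u for a Python child) keeps its output unbuffered so the lines really do appear in real time.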
