I am trying to write a Minecraft server wrapper that allows me to send it commands and receive output. Eventually, I'll attach a socket interface so that I can control my home server remotely to restart it, send commands, etc.
To this end, I am attempting to use the Python subprocess module to start the server, then send commands and receive the server's output. Right now I am running into an issue: I can grab the output of the server and reflect it to the screen, but the very first command I send to the process freezes the whole thing and I have to kill it. It should be noted that I have attempted to remove the process.communicate line and replace it with a print(command); this also froze the process. My very basic current code is as follows:
from subprocess import Popen, PIPE
from threading import Thread

def listen(process):
    while process.poll() is None:
        output = process.stdout.readline()
        print(str(output))

def talk(process):
    command = input("Enter command: ")
    while command != "exit_wrapper":
        # freezes on first sent command
        parse_command(process, command)
        command = input("Enter command: ")
    print("EXITING! KILLING SERVER!")
    process.kill()

def parse_command(process, command):
    process.communicate(command.encode())

def main():
    process = Popen("C:\\Minecraft Servers\\ServerStart.bat", cwd="C:\\Minecraft Servers\\", stdout=PIPE, stdin=PIPE)

    listener = Thread(None, listen, None, kwargs={'process': process})
    listener.start()

    talker = Thread(None, talk, None, kwargs={'process': process})
    talker.start()

    listener.join()
    talker.join()

if __name__ == "__main__":
    main()
Any help offered would be greatly appreciated!
subprocess.Popen.communicate() documentation clearly states:
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
And in your case it's doing exactly that. What you want to do, instead of waiting for the process to terminate, is to interact with it: much like you're reading from the STDOUT stream directly in your listen() function, you should write to the process's STDIN in order to send it commands. Something like:
def talk(process):
    command = input("Enter command: ")
    while command != "exit_wrapper" and process.poll() is None:
        process.stdin.write(command.encode() + b"\n")  # write the command to the process STDIN
        process.stdin.flush()  # flush it
        command = input("Enter command: ")  # get the next command from the user
    if process.poll() is None:
        print("EXITING! KILLING SERVER!")
        process.kill()
The problem with this approach, however, is that the server's output can overwrite your Enter command: prompt, so the user ends up typing commands over the server's output instead of in the 'prompt' you've designated.
What you might want to do instead is parse your server's output in the listen() function, use the collected output to determine when the wrapped server expects user input, and only then call the talk() function (with the while loop removed) to obtain user input.
You should also pipe out STDERR, in case the Minecraft server is trying to tell you something over it.
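A minimal sketch of that idea, assuming the server terminates its prompt lines with a newline and a recognizable marker; the ">" marker here is an assumption, so adjust it to whatever your server actually prints:
def listen(process):
    while process.poll() is None:
        output = process.stdout.readline().decode(errors="replace")
        print(output, end="")
        if output.rstrip().endswith(">"):  # server appears to be waiting for input (assumed marker)
            talk(process)  # prompt the user for exactly one command

def talk(process):
    command = input("Enter command: ")
    if command == "exit_wrapper":
        process.kill()
        return
    process.stdin.write(command.encode() + b"\n")
    process.stdin.flush()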
I wish to control a long-running interactive Bash subprocess from Python's asyncio, send it commands one at a time, and receive results back from it.
The code fragment below works perfectly well in Python 3.7.0, Darwin Kernel Version 16.7.0, except that Bash prompts do not appear immediately on stderr, but appear to "queue up" until something else writes to stderr.
This is a problem because the original program needs to receive the Bash prompt to know that the previous command has finished.
from asyncio.subprocess import PIPE
import asyncio

async def run():
    proc = await asyncio.create_subprocess_exec(
        '/bin/bash', '-i', stdin=PIPE, stdout=PIPE, stderr=PIPE
    )

    async def read(stream):
        message = 'E' if stream is proc.stderr else 'O'
        while True:
            line = await stream.readline()
            if line:
                print(message, line)
            else:
                break

    async def write():
        for command in (b'echo PS1=$PS1', b'ls sub.py', b'ls DOESNT-EXIST'):
            proc.stdin.write(command + b'\n')
            await proc.stdin.drain()
            await asyncio.sleep(0.01)  # TODO: need instead to wait for prompt

    await asyncio.gather(
        read(proc.stderr),
        read(proc.stdout),
        write(),
    )

asyncio.run(run())
Results:
E b'bash: no job control in this shell\n'
O b'PS1=\\u@\\h:\\w$\n'
O b'sub.py\n'
E b'tom@bantam:/code/test/python$ tom@bantam:/code/test/python$ tom@bantam:/code/test/python$ ls: DOESNT-EXIST: No such file or directory\n'
Note that the three prompts all come out together at the end, and only after an error was deliberately caused. The desired behavior would of course be for each prompt to appear immediately as it occurred.
Using proc.stderr.read(1) instead of proc.stderr.readline() results in more code but just the same results.
I'm a little surprised to see the bash: no job control in this shell message appear on stderr, because I am running bash -i and because $PS1 is set. I wonder if that has something to do with the issue, but I haven't been able to take it further.
This held me up for half a day, but once I finished writing the question up, it took me ten minutes to come up with a workaround.
If I modify the prompt so it ends with a \n, then proc.stderr is in fact flushed, and everything works absolutely perfectly.
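For illustration, a minimal sketch of that workaround, applied inside the write() coroutine above; the exact PS1 value is an assumption, and the only point is the trailing \n:
# Make bash's prompt newline-terminated so each prompt arrives as a
# complete line on stderr and readline() returns it immediately.
proc.stdin.write(b"PS1='\\u@\\h:\\w$\\n'\n")
await proc.stdin.drain()

# A prompt line on stderr can now serve as "previous command finished":
line = await proc.stderr.readline()
if line.rstrip().endswith(b'$'):  # assumed prompt suffix
    pass  # safe to send the next command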
I'm trying to terminate a subprocess if a string appears in its output, but it is not working. What is wrong?
import subprocess
import shlex

if "PING" in subprocess.check_call(shlex.split("ping -c 10 google.com")):
    subprocess.check_call(shlex.split("ping -c 10 google.com")).terminate()
Please refer to the documentation for the methods you call. First of all, check_call() runs until the process has finished, then returns the return code from the process. I'm not sure how you intend to find "PING" in a return code, which is typically an integer.
If it is there, look at the body of your if statement: you fork a totally new instance of ping, wait for it to complete, and then try to terminate the return code.
I recommend that you work through a tutorial on subprocesses. Learn how to grab a process handle and invoke operations on that. You'll need to get a handle on the output stream, look for "PING" in that, and then call terminate on the process handle you got at invocation.
import subprocess

run = "ping -c 10 google.com"
log = ""

process = subprocess.Popen(run, stdout=subprocess.PIPE, shell=True)

while True:
    # read the output one byte at a time so we can react while ping is still running
    out = process.stdout.read(1)
    log += out
    print log
    if out == '' and process.poll() is not None:
        break
    if "PING" in log:
        print "terminated!"
        process.kill()
        process.terminate()
        break
I am writing a top script (Python) to control the EDA tools in an IC design flow. The top script has a GUI (python-tkinter), runs the EDA tools with subprocess.Popen(), and prints their stdout on the GUI.
Sometimes an EDA tool will not exit but waits for input; then the GUI hangs waiting for my input, but the top script cannot catch the right stdout message from subprocess.stdout and cannot pass my input to the subprocess.
Below is part of my top script.
class runBlocks():
    ... ...

    def subprocessRun(self):
        # run EDA tool "runCommand" with subprocess.Popen.
        self.RC = subprocess.Popen(runCommand, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

        # save subprocess stdout into a queue.
        self.queue = queue.Queue()
        getRunProcessThread = threading.Thread(target=self.getRunProcess)
        getRunProcessThread.setDaemon(True)
        getRunProcessThread.start()

        # update GUI with the stdout (from the queue).
        self.updateRunProcess()

        # wait for subprocess to finish.
        (stdout, stderr) = self.RC.communicate()
        return(self.RC.returncode)

    def getRunProcess(self):
        # read subprocess.stdout without blocking the GUI thread
        while self.RC.poll() is None:
            stdoutLine = str(self.RC.stdout.readline(), encoding='utf-8')
            self.queue.put(stdoutLine)

    def updateRunProcess(self):
        # if there is still something from subprocess.stdout, print it into the GUI Text.
        while self.RC.poll() is None or not self.queue.empty():
            if self.queue.empty():
                time.sleep(0.1)
                continue
            else:
                line = self.queue.get_nowait()
                if line:
                    # "self.runProcessText" is a Text widget on the GUI that shows the EDA tool output.
                    self.runProcessText.insert(END, line)
                    # "self.runProcessText" is on "self.frame6"; after inserting a new message, update the frame.
                    self.frame6.update()
                    self.runProcessText.see(END)
If I run the EDA tool in a terminal directly, it will stop and wait for my input.
$ dc_shell-t -64 -topo -f ./analyze.run.tcl -o analyze.log
...
#quit
dc_shell-topo>
If I run the EDA tool with my top script, subprocess.stdout stops at the message "#quit"; I cannot get the message "dc_shell-topo>".
...
#quit
I know the subprocess is waiting for my input, but the GUI stops on the message "#quit" and hangs on the time.sleep(0.1) in the while loop.
I also tried replacing "time.sleep(0.1)" with "self.GUItop.after(100, self.updateRunProcess)"; then stdout goes past the "dc_shell-topo>" prompt without any input, and the tool finishes directly.
...
dc_shell-topo>
Memory usage for main task 82 Mbytes.
CPU usage for this session 6 seconds ...
Thank you...
My expected behavior is:
When the command stops at the "dc_shell-topo>" prompt in the subprocess, I can get that message from subprocess.stdout.
The GUI does not hang while waiting for my input.
My questions are:
Why do "time.sleep()" and "self.GUItop.after()" affect the subprocess.stdout messages differently?
When the EDA tool is waiting for input with the message "dc_shell-topo>", how can I get that message from subprocess.stdout? (A possible approach is sketched after these questions.)
When using self.GUItop.after, the GUI Text is not updated before the subprocess finishes (it is waiting for the CPU to be free), but without self.GUItop.after the GUI hangs on the time.sleep() call. How can I solve this?
I think this is a real headache of a problem; I have read thousands of related questions on Google, but none of them answers my question.
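As a side note on the second question: readline() can never return the prompt, because "dc_shell-topo> " is not followed by a newline, so the reading thread blocks. A minimal sketch of one way around this, reading raw chunks instead of lines (the chunk size and decoding are assumptions, not part of the original post):
import os

def getRunProcess(self):
    # Read stdout in raw chunks instead of lines, so a prompt without a
    # trailing newline (e.g. "dc_shell-topo> ") is still delivered.
    # Assumes self.RC was created with stdout=subprocess.PIPE as above.
    fd = self.RC.stdout.fileno()
    while self.RC.poll() is None:
        chunk = os.read(fd, 4096)  # returns as soon as any data is available
        if chunk:
            self.queue.put(chunk.decode('utf-8', errors='replace'))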
I am working on a Python program which wraps the cmd window.
I am using subprocess with PIPE.
If, for example, I write "dir" (via stdin), I use communicate() in order to get the response from cmd, and it does work.
The problem is that in a while True loop this doesn't work more than once; it seems like the subprocess closes itself.
Help me please
import subprocess

process = subprocess.Popen('cmd.exe', shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None)
x = ""
while x != "x":
    x = raw_input("insert a command \n")
    process.stdin.write(x + "\n")
    o, e = process.communicate()
    print o
process.stdin.close()
The main problem is that trying to read from subprocess.PIPE deadlocks when the program is still running but there is nothing to read from stdout. communicate() avoids that by closing stdin and waiting for the process to terminate, which is why it only works once: after the first call, the cmd.exe process is gone.
A solution is to put the code that reads stdout in another thread and access it via a Queue, which allows reliable sharing of data between threads by timing out instead of deadlocking.
The new thread reads standard out continuously, stopping when there is no more data.
Each line is grabbed from the queue until a timeout is reached (no more data in the Queue), then the list of lines is displayed on the screen.
This approach will work for non-interactive programs.
import subprocess
import threading
import Queue

def read_stdout(stdout, queue):
    while True:
        queue.put(stdout.readline())  # this hangs when there is no IO

process = subprocess.Popen('cmd.exe', shell=False, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
q = Queue.Queue()

t = threading.Thread(target=read_stdout, args=(process.stdout, q))
t.daemon = True  # t stops when the main thread stops
t.start()

while True:
    x = raw_input("insert a command \n")
    if x == "x":
        break
    process.stdin.write(x + "\n")
    o = []
    try:
        while True:
            o.append(q.get(timeout=.1))
    except Queue.Empty:
        print ''.join(o)
I have a dtrace snippet run via a Python script, and the dtrace snippet is such that it generates data when CTRL-C is issued to it. So I had a signal_handler defined in the Python script to catch CTRL-C from the user and relay it to the dtrace invocation done via subprocess.Popen, but I am unable to get any output in my log file. Here is the script:
import signal
import subprocess
import time

Proc = []  # replaced by the Popen handle in execute_hotkernel()
signal_posted = False

def signal_handler(sig, frame):
    print("Got CTRL-C!")
    global signal_posted
    signal_posted = True
    global Proc
    Proc.send_signal(signal.SIGINT)  # signal posting from handler

def execute_hotkernel():
    #
    # Generate the .out output file
    #
    fileout = "hotkernel.out"
    fileo = open(fileout, "w+")
    global Proc
    # dtrace_script (the D script source) is defined elsewhere
    Proc = subprocess.Popen(['/usr/sbin/dtrace', '-n', dtrace_script], stdout=fileo)
    while Proc.poll() is None:
        time.sleep(0.5)

def main():
    signal.signal(signal.SIGINT, signal_handler)  # change our signal handler
    execute_hotkernel()

if __name__ == '__main__':
    main()
Since I have the file hotkernel.out set as stdout in the subprocess.Popen command, I was expecting the output from dtrace to be redirected to hotkernel.out on CTRL-C, but it is empty. What is missing here?
I have a similar issue.
In my case, it's a shell script that runs until you hit Control-C, and then prints out summary information. When I run this using subprocess.Popen, whether using a PIPE or a file object for stdout, I either don't get the information (with a file object) or it hangs when I try to run stdout.readline().
I finally tried running the subprocess from the interpreter and discovered that I could get the last line of output after the SIGINT with a PIPE if I call stdout.readline() (where it hangs), hit Control-C (in the interpreter), and then call stdout.readline() again.
I do not know how to emulate this in a script, either for file output or for a PIPE. I did not try the file output in the interpreter.
EDIT:
I finally got back to this and determined that it's actually pretty easy to emulate outside of Python, and it really has nothing to do with Python.
/some_cmd_that_ends_on_sigint
(enter control-c)
*data from stdout in event handler*
Works
/some_cmd_that_ends_on_sigint | tee some.log
(enter control-c)
*Nothing sent to stdout in event handler prints to the screen or the log*
Where's my log?
I ended up just adding a file stream in the event handler (in the some_cmd_that_ends_on_sigint source) that writes the data to a (possibly secondary) log. It works, if a bit awkwardly: you still get the data on the screen when running without any piping, but I can also read it when piped, or from Python, via the secondary log.
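For illustration, a minimal sketch of that workaround as it might look if the wrapped command were itself a Python script; the file name secondary.log and the summary text are assumptions:
import signal
import sys

def on_sigint(sig, frame):
    summary = "summary information..."  # placeholder for the real data
    sys.stdout.write(summary + "\n")    # may be lost when stdout is piped
    with open("secondary.log", "a") as log:
        log.write(summary + "\n")       # survives piping; readable from Python
    sys.exit(0)

signal.signal(signal.SIGINT, on_sigint)
signal.pause()  # run until CTRL-C arrives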