I need to write a Python script that can run another command-line program and interact with its stdin and stdout streams. Essentially, the Python script will read from the target program, intelligently respond by writing to its stdin, and then read the results from the program again. (It would do this repeatedly.)
I've looked through the subprocess module, and I can't seem to get it to do this read/write/read/write thing that I'm looking for. Is there something else I should be trying?
Performing such detailed interaction (where the other program, outside of your control, may buffer its output unless it thinks it's talking to a terminal) needs something like pexpect, which in turn requires pty, a Python standard library module that implements "pseudo-terminals" (on operating systems that allow it, such as Linux and Mac OS X).
Life is harder on Windows, but maybe this zipfile can help: it's supposed to be a port of pexpect to Windows (sorry, I have no Windows machine to check it on). The project in question, called wexpect, lives here.
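On Unix, a minimal sketch of the pseudo-terminal approach using the standard pty module directly looks like the following. The Python REPL stands in for the target program here (any interactive program would do); because it is attached to a pty it believes it has a real terminal and emits its prompts unbuffered.

```python
import os
import pty
import select
import subprocess

# Run the Python REPL under a pseudo-terminal (Unix only) so it
# behaves as if attached to a real terminal.
master, slave = pty.openpty()
child = subprocess.Popen(
    ["python3", "-i", "-q"],
    stdin=slave, stdout=slave, stderr=slave, close_fds=True,
)
os.close(slave)  # the child keeps its own copy open

def read_until(marker, timeout=10.0):
    """Read from the pty master until `marker` appears in the output."""
    buf = b""
    while marker not in buf:
        ready, _, _ = select.select([master], [], [], timeout)
        if not ready:
            raise TimeoutError("no output from child")
        buf += os.read(master, 4096)
    return buf.decode()

read_until(b">>> ")             # wait for the first prompt
os.write(master, b"1 + 2\n")    # "type" a command at the program
reply = read_until(b">>> ")     # read the echo, the result, next prompt
os.write(master, b"exit()\n")
child.wait()
```

pexpect wraps exactly this spawn/expect/sendline loop in a much friendlier API, so in practice you would rarely drive pty by hand.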
See the question
wxPython: how to create a bash shell window?
There I have given a full-fledged interaction with a bash shell, reading stdout and stderr and communicating via stdin. The main part is an extension of this code:
from subprocess import Popen, PIPE

bp = Popen('bash', shell=False, stdout=PIPE, stdin=PIPE, stderr=PIPE, text=True)
bp.stdin.write("ls\n")
bp.stdin.flush()  # without this the command may sit in our own buffer
bp.stdout.readline()
If we try to read all the data, the read will block, so the script linked above does the reading in a thread. That is a complete wxPython app that partially mimics a bash shell.
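A minimal sketch of that threaded-reader pattern (the echoed command and the queue-based hand-off are illustrative choices, not the only way to do it):

```python
import queue
import subprocess
import threading

# bash with all three streams piped; stderr is merged into stdout so a
# single reader thread sees everything.
bp = subprocess.Popen(
    ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT, text=True,
)

out_q = queue.Queue()

def drain(stream):
    # Push each output line onto a queue so the main thread never
    # blocks on a read that has no data yet.
    for line in stream:
        out_q.put(line)

threading.Thread(target=drain, args=(bp.stdout,), daemon=True).start()

bp.stdin.write("echo hello\n")
bp.stdin.flush()
line = out_q.get(timeout=5)    # the echoed "hello"

bp.stdin.write("exit\n")
bp.stdin.flush()
bp.wait()
```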
Related
Is there a way in Python to launch a Tcl shell or a Unix shell and take control of it to perform operations? I understand that subprocess and Tkinter can be used to execute shell/Tcl scripts and get the output.
Specifically, this will be a wrapper on the Tcl shell/tool that controls its stdin/stdout. I have tried establishing a client-server connection, but for Python to control the Tcl shell, the connection needs to be established manually. What should the approach be here?
With Tkinter, you have a Tcl interpreter inside the same process as Python. That's fine for doing conventional stuff (or Tk) but if you've got some sort of custom Tcl interpreter to control, you'll probably want to launch that as a subprocess connected by pipes to the parent Python process.
The subprocess module in Python has most of what you'll need.
import subprocess
p = subprocess.Popen(["tclimplementation"], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, text=True)
# Turn off buffering on the Tcl side
print("fconfigure stdout -buffering none", file=p.stdin, flush=True)
# Also should read from p.stdout; here's how
line = p.stdout.readline().strip()
Telling the Tcl side to fconfigure stdout -buffering none is important, as Tcl (in common with most programs) buffers several kilobytes of output when writing to anything other than a terminal (great for performance, not so great for interactive use).
Now that you can send messages back and forth, you need to think more about how to actually control the other side. That's rather application specific.
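As a sketch of one such request/response loop: write one command, flush, read one reply line. To keep the example self-contained, a tiny stand-in interpreter replaces the actual tclsh binary; in real use you would put your Tcl shell's path in the argv list instead.

```python
import subprocess
import sys

# Stand-in for the Tcl shell: a trivial line-oriented interpreter that
# acknowledges each command.  Replace the argv below with your real
# tclsh binary in practice.
child_src = ("import sys\n"
             "for line in sys.stdin:\n"
             "    print('got: ' + line.strip())\n"
             "    sys.stdout.flush()\n")

p = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def ask(command):
    """Send one command and read one reply line."""
    p.stdin.write(command + "\n")
    p.stdin.flush()          # our side buffers too, so flush explicitly
    return p.stdout.readline().strip()

reply = ask("puts hello")
p.stdin.close()
p.wait()
```

A one-reply-line-per-command protocol like this is the simplest to implement; if commands can produce multi-line output, you would instead have the child print a sentinel after each command and read until you see it.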
Tools like sudo read from /dev/tty to read a password.
I would like to avoid this.
The subprocess should not be able to read /dev/tty; it should fail immediately instead of waiting forever for input.
I am using the subprocess module of Python. The subprocess should fail if it tries to read from /dev/tty.
Remember: sudo is just an example. A fancy command-line argument to sudo does not solve my problem; this should work for all Linux command-line tools.
Question: How to make any tool fail as soon as it wants to read from /dev/tty (called via the subprocess module of Python)?
Background: this is a normal Linux user process, not root.
Since Python 3.2, Popen takes a start_new_session argument, which causes the child process to be started detached from the current controlling terminal, by calling setsid() before executing it.
So all you should need is to start the process with start_new_session=True
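A quick sketch of the effect. The child here is a hypothetical probe that merely tries to open /dev/tty; run with start_new_session=True it has no controlling terminal, so the open() raises OSError at once instead of the tool hanging for input.

```python
import subprocess
import sys

# Child probe: attempt to read from the controlling terminal.  With
# start_new_session=True the child is in a new session with no
# controlling terminal, so open('/dev/tty') fails immediately.
result = subprocess.run(
    [sys.executable, "-c", "open('/dev/tty').readline()"],
    start_new_session=True,
    capture_output=True, text=True, timeout=10,
)
print(result.returncode)   # nonzero: the child failed instead of waiting
```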
I have what seems to be a simple use case: I launch a script (Python or Bash) which runs an emulator from a command prompt, and the emulator then takes commands until I type Ctrl-C or exit. I want to do the same thing from a shell, and my code below isn't working. What I am trying to do is test automation, so I want to issue commands directly to the application from a command shell. In Python, I have the following:
import os
import subprocess
import time
command = ['/usr/local/bin/YCTV-SIM.sh', '-Latest'] #emulator for yahoo widgets
process = subprocess.Popen( command, shell=True, stdin=subprocess.PIPE )
time.sleep(12) #wait for launch to finish
print '/widgets 1' #first command to issue
print '/key enter' #second command to issue
process.wait()
As you can see, this is some pretty simple stuff. When YCTV-SIM.sh is launched from the command shell, I am put into an input mode and my key entries are sent to the application shell (YCTV-SIM.sh reads raw input), so ideally I would be able to pipe text directly to this application shell. So far though, nothing happens; text outputs to the console window, but the application does not respond to the commands that I attempt to issue. I am using Python 2.6.3, if that matters, but Python is not required.
Language is immaterial at this point, so Perl, Python, Bash, Tcl... whatever you can suggest that might help.
You need to redirect stdin of the child process and write into it. See e.g. subprocess.Popen.communicate.
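For example, the two emulator commands from the question can be written to the child's stdin in one shot with communicate(). A stand-in child that upper-cases its input replaces YCTV-SIM.sh here so the example is self-contained; for repeated back-and-forth you would write to and flush p.stdin and read p.stdout in a loop instead.

```python
import subprocess
import sys

# Stand-in child in place of YCTV-SIM.sh: reads all of stdin and
# echoes it back upper-cased.
p = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print(sys.stdin.read().upper(), end='')"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# communicate() writes the input to the child's stdin, closes it, and
# collects the child's output until it exits.
out, _ = p.communicate("/widgets 1\n/key enter\n")
print(out)
```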
Is there any way of attaching a console's STDIN/STDOUT to an already running process?
Use Case:
I have a python script which runs another python script on the command line using popen.
Let's say foo.py runs popen to run python bar.py.
Then bar.py blocks on input. I can get the PID of python bar.py. Is there any way to attach a new console to the running python instance in order to be able to work interactively with it? This is specifically useful because I want to run pdb inside of bar.py.
No way. But you can modify the way you start bar.py in order to be prepared to take over stdin and stdout.
A simple method would be to create named pipes and supply these as stdin/stdout in the Popen call. You may then connect to these pipes from your shell (exec <pipe1 >pipe2) and interact. This has the disadvantage that you must be connected to the pipes to see what the process is doing. While you can work around that by using tee on a log file, depending on the terminal capabilities bar.py demands, this kind of interaction may not be the greatest pleasure.
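A self-contained sketch of the named-pipe variant follows. The one-line child is a stand-in for bar.py, and note one assumption: opening a FIFO with O_RDWR so that open() does not block waiting for the other end is Linux behavior that POSIX leaves undefined.

```python
import os
import subprocess
import sys
import tempfile

# Two named pipes: the parent writes commands into in_path and reads
# replies from out_path.  A shell could attach to the same pipes with
# `exec > in_path < out_path`.
d = tempfile.mkdtemp()
in_path = os.path.join(d, "in")
out_path = os.path.join(d, "out")
os.mkfifo(in_path)
os.mkfifo(out_path)

# O_RDWR on a FIFO does not block waiting for the other end (Linux
# behavior), which sidesteps the usual open-ordering deadlock.
in_fd = os.open(in_path, os.O_RDWR)
out_fd = os.open(out_path, os.O_RDWR)

# Stand-in for bar.py: upper-case one line of input.
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys; print(sys.stdin.readline().strip().upper())"],
    stdin=in_fd, stdout=out_fd,
)

os.write(in_fd, b"hello\n")
reply = os.read(out_fd, 1024).decode().strip()
child.wait()
os.close(in_fd)
os.close(out_fd)
```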
A better way could be to incorporate a terminal multiplexer like GNU screen or tmux into your process tree. These tools can create a virtual terminal in which the application runs, and you may then attach any other terminal to this terminal buffer, and detach it again, at any time. In your specific case, foo.py would run screen or tmux, which would run python bar.py in a complete (VT100) terminal emulation. Maybe this will solve your problem.
You cannot redirect stdin or stdout for a running process. You can, however, add code to your caller -- foo.py -- that reads from foo.py's stdin and sends it to bar.py's stdin, and vice-versa forwards bar.py's stdout to foo.py's stdout.
In this model, foo.py would connect bar.py's stdin and stdout to pipes and would be responsible for shuttling data between those pipes and the real stdin/stdout.
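A sketch of that relay model: one thread copies foo.py's stdin into bar.py's stdin, another copies bar.py's stdout back out. For a self-contained run, bar.py is replaced by a one-line stand-in and foo.py's real stdin/stdout by in-memory streams; in real use you would pass sys.stdin and sys.stdout instead.

```python
import io
import subprocess
import sys
import threading

fake_stdin = io.StringIO("hello\nworld\n")   # stands in for sys.stdin
captured = io.StringIO()                     # stands in for sys.stdout

# Stand-in for bar.py: echo each input line with a prefix.
bar = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys\nfor line in sys.stdin: print('bar saw: ' + line.strip())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def pump(src, dst, close_dst=False):
    # Copy lines from src to dst; closing dst at EOF tells the child
    # that no more input is coming.
    for line in src:
        dst.write(line)
        dst.flush()
    if close_dst:
        dst.close()

t_in = threading.Thread(target=pump, args=(fake_stdin, bar.stdin, True))
t_out = threading.Thread(target=pump, args=(bar.stdout, captured))
t_in.start(); t_out.start()
t_in.join(); t_out.join()
bar.wait()
```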
I have successfully run several Python scripts, calling them from a base script using the subprocess module:
subprocess.Popen([sys.executable, 'script.py'], shell=True)
However, each of these scripts executes some simulations (.exe files from a C++ application) that generate some output to the shell. All these outputs are written to the base shell from where I've launched those scripts. I'd like to generate a new shell for each script. I've tried to generate new shells using the shell=True attribute when calling subprocess.call (also tried with popen), but it doesn't work.
How do I get a new shell for each process generated with the subprocess.call?
I was reading the documentation about stdin and stdout as suggested by Spencer and found a flag the solved the problem: subprocess.CREATE_NEW_CONSOLE. Maybe redirecting the pipes does the job too, but this seems to be the simplest solution (at least for this specific problem). I've just tested it and worked perfectly:
subprocess.Popen([sys.executable, 'script.py'], creationflags=subprocess.CREATE_NEW_CONSOLE)
To open in a different console, do (tested on Windows 7 / Python 3):
from sys import executable
from subprocess import Popen, CREATE_NEW_CONSOLE
Popen([executable, 'script.py'], creationflags=CREATE_NEW_CONSOLE)
input('Enter to exit from this launcher script...')
Popen already generates a sub process to handle things. You just need to redirect the output pipes. Look at the subprocess documentation, specifically the section on popen stdin, stdout and stderr redirection.
If you don't redirect these pipes, it inherits them from the parent. Just be careful about deadlocking your processes.
You wanted additional windows for each subprocess. This is handled as well: look at the startupinfo and creationflags sections of the subprocess documentation, which describe the Windows-specific options for spawning a new console for each subprocess (CREATE_NEW_CONSOLE being the simplest).
This doesn't actually answer your question. But I've had my problems with subprocess too, and pexpect turned out to be really helpful.