Prohibit reading from /dev/tty - python

Tools like sudo read a password from /dev/tty.
I would like to avoid this.
The subprocess should not be able to read /dev/tty; it should fail immediately instead of waiting for input forever.
I am using Python's subprocess module. The subprocess should fail if it tries to read from /dev/tty.
Remember: the tool sudo is just an example. A fancy command line argument to sudo does not solve my problem; this should work for all Linux command line tools.
Question: how do I make any tool fail as soon as it wants to read from /dev/tty (when called via Python's subprocess module)?
Background: this is a normal Linux user process, not root.

Since Python 3.2, Popen takes an argument start_new_session which causes the executed process to be detached from the current controlling terminal, by calling setsid() prior to executing the child process.
So all you should need is to start the process with start_new_session=True.
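A minimal sketch of this, using `sh -c 'read x < /dev/tty'` as a stand-in for any tool (such as sudo) that prompts on /dev/tty:

```python
import subprocess

# Because start_new_session=True runs the child in a new session via
# setsid(), it has no controlling terminal; opening /dev/tty then
# fails, and the command exits immediately instead of hanging.
proc = subprocess.run(
    ['sh', '-c', 'read x < /dev/tty'],  # stand-in for a tool like sudo
    start_new_session=True,
    stdin=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print(proc.returncode != 0)  # True: the read failed instead of blocking
```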

Related

subprocess.communicate() mysteriously hangs only when run from a script

I am invoking a Python tool called spark-ec2 from a Bash script.
As part of its work, spark-ec2 makes several calls to the system's ssh command via use of the subprocess module.
Here's an example:
s = subprocess.Popen(
    ssh_command(opts) + ['-t', '-t', '-o', 'ConnectTimeout=3',
                         '%s@%s' % (opts.user, host),
                         stringify_command('true')],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT  # we pipe stderr through stdout to preserve output order
)
cmd_output = s.communicate()[0]  # [1] is stderr, which we redirected to stdout
For some reason, spark-ec2 is hanging on that line where communicate() is called. I have no idea why.
For the record, here is an excerpt that shows how I'm invoking spark-ec2:
# excerpt from script-that-calls-spark-ec2.sh
# snipped: load AWS keys and do other setup stuff
timeout 30m spark-ec2 launch "$CLUSTER_NAME" ...
# snipped: if timeout, report and exit
What's killing me is that when I call spark-ec2 alone it works fine, and when I copy and paste commands from this Bash script and run them interactively they work fine.
It's only when I execute the whole script like this
$ ./script-that-calls-spark-ec2.sh
that spark-ec2 hangs on that communicate() step. This is driving me nuts.
What's going on?
This is one of those things that, once I figured it out, made me say "Wow..." out loud in a mixture of amazement and disgust.
In this case, spark-ec2 isn't hanging because of some deadlock related to the use of subprocess.PIPE, as might've been the case if spark-ec2 had used Popen.wait() instead of Popen.communicate().
The problem, as hinted at by the fact that spark-ec2 only hangs when the whole Bash script is invoked at once, is caused by something that behaves in subtly different ways depending on whether it's being called interactively or not.
In this case the culprit is the GNU coreutils utility timeout, and an option it offers called --foreground.
From the timeout man page:
--foreground
when not running timeout directly from a shell prompt,
allow COMMAND to read from the TTY and get TTY signals; in this
mode, children of COMMAND will not be timed out
Without this option, Python's communicate() cannot read the output of the SSH command being invoked by subprocess.Popen().
This probably has something to do with SSH allocating TTYs via the -t switches, but honestly I don't fully understand it.
What I can say, though, is that modifying the Bash script to use the --foreground option like this
timeout --foreground 30m spark-ec2 launch "$CLUSTER_NAME" ...
makes everything work as expected.
Now, if I were you, I would consider converting that Bash script into something else that won't drive you nuts...

Send input to command line prompt from Python program

I believe this to be a very simple question, but I have been failing to find a simple answer.
I am running a python program that terminates an AWS cluster (using starcluster). I am just calling a command from my python program using subprocess, something like the following.
subprocess.call('starcluster terminate cluster', shell=True)
The actual command is largely irrelevant for my question but provides some context. This command will begin terminating the cluster, but will prompt for a yes/no input before continuing, like so:
Terminate EBS cluster (y/n)?
How do I automate typing yes from within my python program as input to this prompt?
While this is possible to do with subprocess alone in a somewhat limited way, I would go with pexpect for such interaction, e.g.:
import pexpect
child = pexpect.spawn('starcluster terminate cluster')
# expect() takes a regular expression, so the parentheses and '?' must be escaped
child.expect(r'Terminate EBS cluster \(y/n\)\?')
child.sendline('y')
You can write to stdin using Popen:
from subprocess import Popen, PIPE
proc = Popen(['starcluster', 'terminate', 'cluster'], stdin=PIPE)
proc.stdin.write(b'y\n')  # on Python 3 the pipe is a byte stream unless text=True is given
proc.stdin.flush()
It might be simplest to check the documentation on the target program. Often a flag can be set to answer yes to all prompts, e.g., apt-get -y install.

Persistent Terminal Session in Python

I may not at all understand this correctly, but I am trying to allow a Python program to interface with a subprocess that runs commands as if on a Linux shell.
For example, I want to be able to run "cd /" and then "pwd" later in the program and get "/".
I am currently trying to use subprocess.Popen and the communicate() method to send and receive data. The first command, sent with the Popen constructor, runs fine and gives proper output. But I cannot send another command via communicate(input="pwd").
My code so far:
from subprocess import Popen, PIPE
term=Popen("pwd", stdout=PIPE, stdin=PIPE)
print(flush(term.communicate()))
term.communicate(input="cd /")
print(flush(term.communicate(input="pwd")))
Is there a better way to do this? Thanks.
Also, I am running Python 3.
First of all, you need to understand that running a shell command and running a program aren't the same thing.
Let me give you an example:
>>> import subprocess
>>> subprocess.call(['/bin/echo', '$HOME'])
$HOME
0
>>> subprocess.call(['/bin/echo $HOME'], shell=True)
/home/kkinder
0
Notice that without the shell=True parameter, the text of $HOME is not expanded. That's because the /bin/echo program doesn't parse $HOME, Bash does. What's really happening in the second call is something analogous to this:
>>> subprocess.call(['/bin/bash', '-c', '/bin/echo $HOME'])
/home/kkinder
0
Using the shell=True parameter basically says to the subprocess module, go interpret this text using a shell.
So, you could add shell=True, but then the problem is that once the command finishes, its state is lost. Each application in the stack has its own working directory, so the directories look something like this:
bash - /foo/bar
python - /foo
bash via subprocess - /
After your command executes, the Python process's working directory stays the same, and the subprocess's working directory is discarded once the shell finishes your command.
Basically, what you're asking for isn't practical. What you would need to do is open a pipe to Bash, interactively feed it the commands your user types, and then read the output in a non-blocking way. That's going to involve a complicated pipe, threads, etc. Are you sure there's not a better way?
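For completeness, here is a rough sketch of that pipe-to-Bash approach: a single long-lived bash child whose state (such as the working directory) persists between commands. It avoids non-blocking reads by sending one command at a time and echoing a sentinel marker to find the end of each command's output — fragile (it assumes commands never print the marker themselves), but it shows the idea:

```python
import subprocess

# One long-lived bash process; its state (cwd, variables) persists
# across commands because we never restart it.
shell = subprocess.Popen(['bash'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, text=True)

def run(cmd, marker='__CMD_DONE__'):
    # Send the command, then echo a sentinel so we know where its
    # output ends without needing non-blocking reads or threads.
    shell.stdin.write(cmd + '\n')
    shell.stdin.write('echo %s\n' % marker)
    shell.stdin.flush()
    lines = []
    for line in shell.stdout:
        if line.strip() == marker:
            break
        lines.append(line)
    return ''.join(lines)

run('cd /')
cwd = run('pwd')
print(cwd.strip())  # prints "/": the directory change persisted
shell.stdin.close()
shell.wait()
```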

Python: interacting with STDIN/OUT of a running process in *nix

Is there any way of attaching a console's STDIN/STDOUT to an already running process?
Use Case:
I have a python script which runs another python script on the command line using popen.
Let's say foo.py runs popen to run python bar.py.
Then bar.py blocks on input. I can get the PID of python bar.py. Is there any way to attach a new console to the running python instance in order to be able to work interactively with it? This is specifically useful because I want to run pdb inside of bar.py.
No way. But you can modify the way you start bar.py in order to be prepared to take over stdin and stdout.
A simple method would be to just create named pipes and supply these as stdin/stdout in the Popen call. You may then connect to these pipes from your shell (exec <pipe1 >pipe2) and interact. This has the disadvantage that you must be connected to the pipes to see what the process is doing. While you can work around that by using tee on a log file, depending on the terminal capability demands of bar.py this kind of interaction may not be the greatest pleasure.
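A self-contained sketch of that named-pipe idea, with `cat` standing in for the real `python bar.py` (all paths are illustrative). Note that opening a FIFO normally blocks until the other end is opened; the `O_RDWR` trick below sidesteps that for this demo, but a real setup would coordinate the open ordering between the two sides:

```python
import os
import subprocess
import tempfile

d = tempfile.mkdtemp()
in_pipe = os.path.join(d, 'in')    # a shell would write commands here
out_pipe = os.path.join(d, 'out')  # a shell would read output here
os.mkfifo(in_pipe)
os.mkfifo(out_pipe)

# On Linux, opening a FIFO with O_RDWR does not block waiting for a
# peer, which avoids a deadlock in this single-process demo.
in_fd = os.open(in_pipe, os.O_RDWR)
out_fd = os.open(out_pipe, os.O_RDWR)

# 'cat' stands in for "python bar.py": it reads stdin, writes stdout.
proc = subprocess.Popen(['cat'], stdin=in_fd, stdout=out_fd)

os.write(in_fd, b'hello\n')    # what "exec >pipe" would send
reply = os.read(out_fd, 1024)  # what "exec <pipe" would receive
print(reply)

proc.terminate()
proc.wait()
os.close(in_fd)
os.close(out_fd)
```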
A better way could be to incorporate a terminal multiplexer like GNU screen or tmux into your process tree. These tools can create a virtual terminal in which the application runs. You may then at any time attach and detach any other terminal to this terminal buffer. In your specific case, foo.py would run screen or tmux, which would run python bar.py in a complete (VT100) terminal emulation. Maybe this will solve your problem.
You cannot redirect stdin or stdout for a running process. You can, however, add code to your caller -- foo.py -- that will read from foo.py's stdin and send it to bar.py's stdout, and vice-versa.
In this model, foo.py would connect bar.py's stdin and stdout to pipes and would be responsible for shuttling data between those pipes and the real stdin/stdout.
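That shuttling model can be sketched as follows — a trivial upper-casing child stands in for bar.py, and `io.StringIO` objects stand in for foo.py's real stdin/stdout (all of these stand-ins are assumptions for the demo):

```python
import io
import subprocess
import sys
import threading

def pump(src, dst, close=True):
    # Copy lines from src to dst until src is exhausted.
    for line in src:
        dst.write(line)
        dst.flush()
    if close:
        dst.close()

# A trivial stand-in for "python bar.py": it upper-cases its input.
child = subprocess.Popen(
    [sys.executable, '-c',
     'import sys\nfor l in sys.stdin: sys.stdout.write(l.upper())'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

user_input = io.StringIO('hello\n')  # stands in for foo.py's real stdin
captured = io.StringIO()             # stands in for foo.py's real stdout

# One direction runs in a thread so both pipes can move concurrently.
t = threading.Thread(target=pump, args=(user_input, child.stdin))
t.start()
pump(child.stdout, captured, close=False)
t.join()
child.wait()
print(captured.getvalue())  # HELLO
```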

Interacting with another command line program in Python

I need to write a Python script that can run another command line program and interact with its stdin and stdout streams. Essentially, the Python script will read from the target command line program, intelligently respond by writing to its stdin, and then read the results from the program again. (It would do this repeatedly.)
I've looked through the subprocess module, and I can't seem to get it to do this read/write/read/write thing that I'm looking for. Is there something else I should be trying?
To perform such detailed interaction (when, outside of your control, the other program may be buffering its output unless it thinks it's talking to a terminal) you need something like pexpect -- which in turn requires pty, a Python standard library module that (on operating systems that allow it, such as Linux and Mac OS X) implements "pseudo-terminals".
Life is harder on Windows, but maybe this zipfile can help -- it's supposed to be a port of pexpect to Windows (sorry, I have no Windows machine to check it on). The project in question, called wexpect, lives here.
See the question
wxPython: how to create a bash shell window?
There I have given a full-fledged interaction with a bash shell, reading stdout and stderr and communicating via stdin.
The main part is an extension of this code:
from subprocess import Popen, PIPE
bp = Popen('bash', shell=False, stdout=PIPE, stdin=PIPE, stderr=PIPE)
bp.stdin.write(b"ls\n")  # on Python 3 the pipe is a byte stream
bp.stdin.flush()
bp.stdout.readline()
If we try to read all the data it will block, so the script I linked does the reading in a thread. It is a complete wxPython app partially mimicking a bash shell.
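The thread-based reading it mentions can be sketched like this: a daemon thread drains bash's stdout into a queue, so the caller never blocks on a read (the 5-second timeout is an arbitrary choice for the demo):

```python
import queue
import subprocess
import threading

bp = subprocess.Popen(['bash'], stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                      text=True)
lines = queue.Queue()

def reader():
    # Runs in the background; blocks here instead of in the caller.
    for line in bp.stdout:
        lines.put(line)

threading.Thread(target=reader, daemon=True).start()

bp.stdin.write('echo hello\n')
bp.stdin.flush()
first = lines.get(timeout=5)  # waits at most 5 seconds, not forever
print(first.strip())          # hello
bp.stdin.close()
bp.wait()
```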
