subprocess.communicate() mysteriously hangs only when run from a script - python

I am invoking a Python tool called spark-ec2 from a Bash script.
As part of its work, spark-ec2 makes several calls to the system's ssh command via use of the subprocess module.
Here's an example:
s = subprocess.Popen(
    ssh_command(opts) + ['-t', '-t', '-o', 'ConnectTimeout=3',
                         '%s@%s' % (opts.user, host), stringify_command('true')],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT  # we pipe stderr through stdout to preserve output order
)
cmd_output = s.communicate()[0] # [1] is stderr, which we redirected to stdout
For some reason, spark-ec2 is hanging on that line where communicate() is called. I have no idea why.
For the record, here is an excerpt that shows how I'm invoking spark-ec2:
# excerpt from script-that-calls-spark-ec2.sh
# snipped: load AWS keys and do other setup stuff
timeout 30m spark-ec2 launch "$CLUSTER_NAME" ...
# snipped: if timeout, report and exit
What's killing me is that when I call spark-ec2 alone it works fine, and when I copy and paste commands from this Bash script and run them interactively they work fine.
It's only when I execute the whole script like this
$ ./script-that-calls-spark-ec2.sh
that spark-ec2 hangs on that communicate() step. This is driving me nuts.
What's going on?

This is one of those things that, once I figured it out, made me say "Wow..." out loud in a mixture of amazement and disgust.
In this case, spark-ec2 isn't hanging because of some deadlock related to the use of subprocess.PIPE, as might've been the case if spark-ec2 had used Popen.wait() instead of Popen.communicate().
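For contrast, the wait()-with-PIPE deadlock ruled out here is easy to reproduce in miniature: once the child fills the pipe buffer, wait() and the child block on each other, while communicate() drains the pipe as it waits. A sketch:

```python
import subprocess
import sys

# A child that writes more than a typical pipe buffer (~64 KiB) to stdout.
child = "import sys; sys.stdout.write('x' * 200000)"

p = subprocess.Popen([sys.executable, "-c", child], stdout=subprocess.PIPE)
# p.wait() here could deadlock: the child blocks writing once the pipe
# buffer is full, while the parent blocks waiting for it to exit.
# communicate() reads the pipe while waiting, so it cannot deadlock.
out, _ = p.communicate()
print(len(out))  # 200000
```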
The problem, as hinted at by the fact that spark-ec2 only hangs when the whole Bash script is invoked at once, is caused by something that behaves in subtly different ways depending on whether it's being called interactively or not.
In this case the culprit is the GNU coreutils utility timeout, and an option it offers called --foreground.
From the timeout man page:
--foreground
when not running timeout directly from a shell prompt,
allow COMMAND to read from the TTY and get TTY signals; in this
mode, children of COMMAND will not be timed out
Without this option, Python's communicate() cannot read the output of the SSH command being invoked by subprocess.Popen().
This probably has something to do with SSH allocating TTYs via the -t switches, but honestly I don't fully understand it.
What I can say, though, is that modifying the Bash script to use the --foreground option like this
timeout --foreground 30m spark-ec2 launch "$CLUSTER_NAME" ...
makes everything work as expected.
Now, if I were you, I would consider converting that Bash script into something else that won't drive you nuts...

Prohibit reading from /dev/tty

Tools like sudo read from /dev/tty to read a password.
I would like to avoid this.
The subprocess should not be able to read /dev/tty. The subprocess should fail immediately instead of waiting for input forever.
I am using the subprocess module of Python. The subprocess should fail if it tries to read from /dev/tty.
Remember: the tool sudo is just an example. A fancy command-line argument to sudo does not solve my problem. This should work for all Linux command-line tools.
Question: How can I make any tool fail as soon as it wants to read from /dev/tty (when called via the subprocess module of Python)?
Background: This is a normal Linux user process, not root.
Since Python 3.2, Popen takes a start_new_session argument, which causes the executed process to be started detached from the current controlling terminal by calling setsid() prior to executing the child process.
So all you should need is to start the process with start_new_session=True.
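As a minimal sketch of that, the snippet below uses `read line < /dev/tty` as a stand-in for any tool that prompts on the controlling terminal. With start_new_session=True the child gets its own session with no controlling terminal, so opening /dev/tty fails at once instead of blocking:

```python
import subprocess

# `read line < /dev/tty` stands in for any tool that prompts on the
# controlling terminal (sudo, ssh, ...).
p = subprocess.Popen(
    ["sh", "-c", "read line < /dev/tty"],
    start_new_session=True,     # child calls setsid() before exec
    stderr=subprocess.DEVNULL,  # silence the shell's error message
)
p.wait()
print(p.returncode)  # nonzero: opening /dev/tty failed instead of hanging
```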

Toggle process with Python-script (kill when running, start when not running)

I'm currently running an OpenELEC (XBMC) installation on a Raspberry Pi and installed a tool named "Hyperion" which takes care of the connected Ambilight. I'm a total noob when it comes to Python-programming, so here's my question:
How can I run a script that checks if a process with a specific string in its name is running and:
kill the process when it's running
start the process when it's not running
The goal of this is to have one script that toggles the Ambilight. Any idea how to achieve this?
You may want to have a look at the subprocess module which can run shell commands from Python. For instance, have a look at this answer. You can then get the stdout from the shell command to a variable. I suspect you are going to need the pidof shell command.
The basic idea would be along the lines of:
import subprocess

try:
    output = subprocess.check_output(["pidof", "-s", "-x", "hyperiond"])
except subprocess.CalledProcessError:
    # not running: spawn the process with subprocess.Popen
    subprocess.Popen("hyperiond")
else:
    # running: kill the process by the pid that pidof printed
    subprocess.call(["kill", output.strip()])
I've tested this code in Ubuntu with bash as the process and it works as expected. In your comments you note that you are getting file not found errors. You can try putting the complete path to pidof in your check_output call. This can be found using which pidof from the terminal. The code for my system would then become
subprocess.check_output(["/bin/pidof", "-s", "-x", "hyperiond"])
Your path may differ. On Windows, adding shell=True to the check_output arguments fixes this issue, but I don't think this is relevant for Linux.
Thanks so much for your help @will-hart, I finally got it working. I needed to change some details because the script kept saying that "output" is not defined. Here's what it looks like now:
#!/usr/bin/env python
import subprocess

try:
    subprocess.check_output(["pidof", "hyperiond"])
except subprocess.CalledProcessError:
    subprocess.Popen(["/storage/hyperion/bin/hyperiond.sh", "/storage/.config/hyperion.config.json"])
else:
    subprocess.call(["killall", "hyperiond"])

Issue terminal commands that are piped to a shell script

I have what seems to be a simple use case: I launch a script (Python or Bash) which runs an emulator from the command prompt, and the emulator then takes commands until I type Ctrl-C or exit. I want to do this same thing from a shell, and my code below isn't working. What I am trying to do is test automation, so I want to issue commands directly to the application from the command shell. In Python, I have the following:
import subprocess
import time

command = ['/usr/local/bin/YCTV-SIM.sh', '-Latest']  # emulator for Yahoo Widgets
process = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE)
time.sleep(12)  # wait for launch to finish
print '/widgets 1'  # first command to issue
print '/key enter'  # second command to issue
process.wait()
As you can see, this is some pretty simple stuff. When YCTV-SIM.sh is launched from the command shell, I am put into an input mode and my key entries are sent to the application shell (YCTV-SIM.sh reads raw input), so ideally I would be able to pipe text directly to this application shell. So far, though, nothing happens; the text outputs to the console window but the application does not respond to the commands that I attempt to issue. I am using Python 2.6.3, if that matters, but Python is not required.
Language is immaterial at this point, so Perl, Python, Bash, Tcl... whatever you can suggest that might help.
You need to redirect stdin of the child process and write into it. See e.g. subprocess.Popen.communicate.
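A minimal sketch of that, using cat as a stand-in for the emulator (substitute the question's ['/usr/local/bin/YCTV-SIM.sh', '-Latest'] for the real thing):

```python
import subprocess

# `cat` stands in for the emulator here; it simply echoes what it reads.
process = subprocess.Popen(
    ["cat"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
# Write the commands to the child's stdin; a bare print only sends them
# to your own terminal, not to the subprocess.
out, _ = process.communicate(b"/widgets 1\n/key enter\n")
print(out.decode())
```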

Persistent Terminal Session in Python

I may not at all understand this correctly, but I am trying to allow a Python program to interface with a subprocess that runs commands as if on a Linux shell.
For example, I want to be able to run "cd /" and then "pwd" later in the program and get "/".
I am currently trying to use subprocess.Popen and the communicate() method to send and receive data. The first command, sent with the Popen constructor, runs fine and gives proper output. But I cannot send another command via communicate(input="pwd").
My code so far:
from subprocess import Popen, PIPE
term=Popen("pwd", stdout=PIPE, stdin=PIPE)
print(flush(term.communicate()))
term.communicate(input="cd /")
print(flush(term.communicate(input="pwd")))
Is there a better way to do this? Thanks.
Also, I am running Python 3.
First of all, you need to understand that running a shell command and running a program aren't the same thing.
Let me give you an example:
>>> import subprocess
>>> subprocess.call(['/bin/echo', '$HOME'])
$HOME
0
>>> subprocess.call(['/bin/echo $HOME'], shell=True)
/home/kkinder
0
Notice that without the shell=True parameter, the text of $HOME is not expanded. That's because the /bin/echo program doesn't parse $HOME, Bash does. What's really happening in the second call is something analogous to this:
>>> subprocess.call(['/bin/bash', '-c', '/bin/echo $HOME'])
/home/kkinder
0
Using the shell=True parameter basically says to the subprocess module, go interpret this text using a shell.
So, you could add shell=True, but then the problem is that once the command finishes, its state is lost. Each application in the stack has its own working directory, so the working directory at each level will be something like this:
bash - /foo/bar
python - /foo
bash via subprocess - /
After your command executes, the python process's path stays the same and the subprocess's path is discarded once the shell finishes your command.
Basically, what you're asking for isn't practical. What you would need to do is, open a pipe to Bash, interactively feed it commands your user types, then read the output in a non-blocking way. That's going to involve a complicated pipe, threads, etc. Are you sure there's not a better way?
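That said, a one-shot version of the pipe idea is enough to show state persisting inside a single shell process (truly interactive use would need the non-blocking reads just described):

```python
import subprocess

# One long-lived bash process: the `cd` persists for the `pwd` because
# both commands run in the same shell.
shell = subprocess.Popen(
    ["bash"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
# communicate() sends all input up front and closes stdin, so this is
# one-shot, not interactive -- but state carries across the commands.
out, _ = shell.communicate("cd /\npwd\n")
print(out.strip())  # /
```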

I am unable to interact wih subprocess created by Popen

I am using Python 2.5 on Windows XP.
I am using subprocess to run a shell, and now I want to run gdb inside that shell using subprocess.
My code:
PID = subprocess.Popen('C:/STM/STxP70_Toolset_2010.2/bin/STxP70.bat', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Now the shell will open. Next, if I try to run gdb using communicate with
PID.communicate("gdb"),
gdb does not run in the shell.
What should I do about this?
Your code:
Starts STxP70.bat.
Writes the string "gdb" (with no terminating newline) to its standard input and closes the standard input.
Reads its output until end of file. PID.communicate won't let you interact with the subprocess any further: it writes the provided string and then collects all output until the process terminates.
When STxP70.bat completes, the subprocess terminates.
Note that if "shell will open" means a new window comes up with a shell prompt in it, you are out of luck. It would mean STxP70.bat started it with the 'start' command, and you can't communicate with that, because it's not inheriting your stdin/stdout/stderr pipes. You would have to create your own modified version of the batch file that does not use 'start'.
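To actually keep a session open, write to stdin directly (with a terminating newline, and a flush) instead of using communicate(). A minimal sketch, using bash as a portable stand-in for STxP70.bat:

```python
import subprocess

# bash stands in for STxP70.bat here. Writing to stdin directly lets you
# send several commands; communicate() would close stdin after one shot.
p = subprocess.Popen(
    ["bash"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
p.stdin.write("echo first\n")   # note the terminating newline
p.stdin.flush()                 # make sure the child actually sees it
p.stdin.write("echo second\n")
p.stdin.close()                 # signal end of input
out = p.stdout.read()
p.wait()
print(out)  # "first" and "second" on separate lines
```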
