Git hook -> Python -> Bash: How to read user input?

I'm writing a Git hook in Python 3.5. The Python script calls a Bash script that reads input from the user using the read command.
The Bash script works by itself, and it also works when the Python script is called directly, but when Git runs the hook written in Python, it doesn't work as expected: no input is requested from the user.
Bash script:
#!/usr/bin/env bash
echo -n "Question? [Y/n]: "
read REPLY
Git hook (Python script):
#!/usr/bin/env python3
from subprocess import Popen, PIPE
proc = Popen('/path/to/myscript.sh', shell=True, stderr=PIPE, stdout=PIPE)
stdout_raw, stderr_raw= proc.communicate()
When I execute the Python script, Bash's read does not seem to wait for input, and I only get:
b'\nQuestion? [Y/n]: \n'
How to let the bash script read input when being called from Python?

It turns out the problem had nothing to do with Python: a Git hook that called the bash script directly also failed to ask for input.
The solution I found is given here.
Basically, the solution is to add the following to the bash script before the read:
# Allows us to read user input below, assigns stdin to keyboard
exec < /dev/tty
In my case, I also had to call the bash process simply as Popen(mybashscript) instead of Popen(mybashscript, shell=True, stderr=PIPE, stdout=PIPE), so the script can freely write to STDOUT without it being captured in a PIPE.
Alternatively, I left the bash script unmodified and instead used the following in Python:
sys.stdin = open("/dev/tty", "r")
proc = Popen(h, stdin=sys.stdin)
which is also suggested in the comments of the aforementioned link.
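For reference, here is a minimal sketch of that second approach as a complete hook; the script path is a placeholder:
#!/usr/bin/env python3
# Git runs hooks with stdin not attached to the terminal, so reattach it
# to /dev/tty before handing it to the child process.
import sys
from subprocess import Popen

sys.stdin = open("/dev/tty", "r")
proc = Popen(["/path/to/myscript.sh"], stdin=sys.stdin)  # no PIPEs, so output reaches the terminal
proc.wait()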

Adding
print(stdout_raw)
print(stderr_raw)
shows
b''
b'/bin/sh: myscript.sh: command not found\n'
here. Once Python could find the script, prefixing myscript.sh with ./ made the read work. Passing cwd='.' to Popen may also work.
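For example, something along these lines (the working directory is an assumption):
from subprocess import Popen

# Either use an explicit ./ prefix, or point cwd at the directory containing
# the script, so the shell can locate it. No PIPEs, so the prompt reaches the terminal.
proc = Popen('./myscript.sh', shell=True, cwd='/path/to/repo')
proc.wait()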

This is what worked for me without invoking a bash script from within Python. It is a modified version of arod's answer.
import subprocess
import sys
sys.stdin = open("/dev/tty", "r")
user_input = subprocess.check_output("read -p \"Please give your input: \" userinput && echo \"$userinput\"", shell=True, stdin=sys.stdin).rstrip()
print(user_input)

Based on the above replies:
import sys
import subprocess
def getInput(prompt):
    sys.stdin = open("/dev/tty", "r")
    command = f"read -p \"{prompt}\" ret && echo \"$ret\""
    userInput = subprocess.check_output(command, shell=True, stdin=sys.stdin).rstrip().decode("utf-8")
    return userInput
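A quick usage sketch (the prompt text is made up):
answer = getInput("Question? [Y/n]: ")
print(answer)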

Related

How to add environment variables to the bash opened by subprocess module?

I need to use wget in a Python script with the subprocess.call function, but it seems the "wget" command cannot be found by the bash subprocess opened by Python.
I have added the environment variable (the path where wget is):
export PATH=/usr/local/bin:$PATH
to the ~/.bashrc file and the ~/.bash_profile file on my Mac, and I have made sure to source them.
And the python script looks like:
import subprocess as sp
cmd = 'wget'
process = sp.Popen(cmd, stdout=sp.PIPE, stdin=sp.PIPE,
                   stderr=sp.PIPE, shell=True, executable='/bin/bash')
(stdoutdata, stderrdata) = process.communicate()
print stdoutdata, stderrdata
The expected output should be like
wget: missing URL
Usage: wget [OPTION]... [URL]...
But the result is always
/bin/bash: wget: command not found
Interestingly, I can get the help output if I type wget directly in a bash terminal, but it never works in the Python script. How could that be?
PS:
If I change the command to
cmd = '/usr/local/bin/wget'
then it works. So I am sure I got wget installed.
You can pass an env= argument to the subprocess functions.
import os
myenv = os.environ.copy()
myenv['PATH'] = '/usr/local/bin:' + myenv['PATH']
subprocess.run(..., env=myenv)
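Assembled for the wget example from the question, this might look like the following sketch:
import os
import subprocess

myenv = os.environ.copy()
myenv['PATH'] = '/usr/local/bin:' + myenv['PATH']
# The child shell inherits the augmented PATH and can now find wget.
result = subprocess.run('wget', env=myenv, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
print(result.stdout, result.stderr)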
However, you probably want to avoid running a shell at all, and instead augment the PATH that Python uses to find the binary to run in the subprocess call.
import subprocess as sp
import os
os.environ['PATH'] = '/usr/local/bin:' + os.environ['PATH']
cmd = 'wget'
# use run instead of Popen
# don't needlessly use a shell
# and thus put [cmd] as a list
process = sp.run([cmd], stdout=sp.PIPE, stdin=sp.PIPE,
stderr=sp.PIPE,
universal_newlines=True)
print(process.stdout, process.stderr)
Running Bash commands in Python explains the changes I made in more detail.
However, there is no good reason to use an external utility for this; the Python requests library does pretty much everything wget does, often more naturally and with more control over what exactly it does.
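For instance, a wget-style download could look roughly like this with requests (URL and filename are placeholders):
import requests

url = "https://example.com/file.tar.gz"
# Stream the response to a local file instead of loading it all into memory.
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    with open("file.tar.gz", "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)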

Python: get stdout from background subprocess

I'm trying to get information about a network interface on a Linux machine with a Python script, i.e. 'ifconfig -a eth0'. So I'm using the following code:
import subprocess
proc = subprocess.Popen('ifconfig -a eth0', shell=True, stdout=subprocess.PIPE)
proc.wait()
output = proc.communicate()[0]
Well if I execute the script from terminal with
python myScript.py
or with
python myScript.py &
it works fine, but when it is run in the background (launched by crontab) without an active shell, I cannot get the output.
Any idea?
Thanks
Have you tried using "screen"?
proc = subprocess.Popen('screen ifconfig -a eth0', shell=True, stdout=subprocess.PIPE)
I'm not sure whether it will work or not.
Try proc.stdout.readline() instead of communicate; also, stderr=subprocess.STDOUT in subprocess.Popen() might help. Please post the results.
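A sketch of that suggestion (reading line by line, with stderr merged into stdout):
import subprocess

proc = subprocess.Popen('ifconfig -a eth0', shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(proc.stdout.readline, b''):
    print(line.decode().rstrip())
proc.wait()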
I found a solution to the problem: I guess the system is not able to find ifconfig when the script is executed by crontab, since cron runs with a minimal PATH. Adding the full path to the subprocess call allows the script to be executed properly:
proc = subprocess.Popen('/sbin/ifconfig -a eth0', shell=True, stdout=subprocess.PIPE)
proc.wait()
output = proc.communicate()[0]
and now I can manage the output string.
Thanks
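A variation that avoids hard-coding the path is to resolve it first with shutil.which (Python 3.3+); this is a sketch, not part of the original answer:
import shutil
import subprocess

# cron runs with a minimal PATH, so resolve the absolute path up front.
ifconfig = shutil.which('ifconfig') or '/sbin/ifconfig'
proc = subprocess.Popen([ifconfig, '-a', 'eth0'], stdout=subprocess.PIPE)
output = proc.communicate()[0]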

Running bash in subprocess breaks stdout of tty if interrupted while waiting on `read -s`?

As @Bakuriu points out in the comments, this is basically the same problem as in BASH: Ctrl+C during input breaks current terminal. However, I can only reproduce the problem when bash is run as a subprocess of another executable, and not directly from bash, where it seems to handle terminal cleanup fine. I would be interested in an answer as to why bash seems to be broken in this regard.
I have a Python script meant to log the output of a subprocess started by that script. If the subprocess happens to be a bash script that at some point reads user input by calling the read -s built-in (the -s, which prevents echoing of entered characters, being key), and the user interrupts the script (i.e. by Ctrl-C), then bash fails to restore output to the tty, even though it continues to accept input.
I whittled this down to a simple example:
$ cat test.py
#!/usr/bin/python
import subprocess as sp
p = sp.Popen(['bash', '-c', 'read -s foo; echo $foo'])
p.wait()
Upon running ./test.py it will wait for some input. If you type some input and press Enter, the script returns and echoes your input as expected, and there is no issue. However, if you immediately hit Ctrl-C, Python displays a traceback for the KeyboardInterrupt and then returns to the bash prompt, but nothing you type is displayed to the terminal. Typing reset<Enter> successfully resets the terminal, however.
I'm somewhat at a loss as to exactly what's happening here.
Update: I managed to reproduce this without Python in the mix at all. I was trying to run bash under strace to see if I could glean anything about what was going on. With the following bash script:
$ cat read.sh
#!/bin/bash
read -s foo
echo $foo
Running strace ./read.sh and immediately hitting Ctrl-C produces:
...
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon -echo ...}) = 0
brk(0x1a93000) = 0x1a93000
read(0, Process 25487 detached
<detached ...>
Where PID 25487 was read.sh. This leaves the terminal in the same broken state. However, strace -I1 ./read.sh simply interrupts the ./read.sh process and returns to a normal, non-broken terminal.
It seems like this is related to the fact that bash -c starts a non-interactive shell. This probably prevents it from restoring the terminal state.
To explicitly start an interactive shell you can just pass the -i option to bash.
$ cat test_read.py
#!/usr/bin/python3
from subprocess import Popen
p = Popen(['bash', '-c', 'read -s foo; echo $foo'])
p.wait()
$ diff test_read.py test_read_i.py
3c3
< p = Popen(['bash', '-c', 'read -s foo; echo $foo'])
---
> p = Popen(['bash', '-ic', 'read -s foo; echo $foo'])
When I run it and press Ctrl+C:
$ ./test_read.py
I obtain:
Traceback (most recent call last):
  File "./test_read.py", line 4, in <module>
    p.wait()
  File "/usr/lib/python3.5/subprocess.py", line 1648, in wait
    (pid, sts) = self._try_wait(0)
  File "/usr/lib/python3.5/subprocess.py", line 1598, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
and the terminal isn't properly restored.
If I run the test_read_i.py file in the same way I just get:
$ ./test_read_i.py
$ echo hi
hi
no error, and terminal works.
As I wrote in a comment on my question, when read -s is run, bash saves the current tty attributes, and installs an add_unwind_protect handler to restore the previous tty attributes when the stack frame for read exits.
Normally, bash installs a handler for SIGINT at its startup which, among other things, invokes a full unwinding of the stack, including running all unwind_protect handlers, such as the one added by read. However, this SIGINT handler is normally only installed if bash is running in interactive mode. According to the source code, interactive mode is enabled only in the following conditions:
/* First, let the outside world know about our interactive status.
A shell is interactive if the `-i' flag was given, or if all of
the following conditions are met:
no -c command
no arguments remaining or the -s flag given
standard input is a terminal
standard error is a terminal
Refer to Posix.2, the description of the `sh' utility. */
I think this should also explain why I couldn't reproduce the problem simply by running bash from within bash. But when I ran it under strace, or as a subprocess started from Python, I was either using -c, or the program's stderr was not a terminal, and so on.
As @Baikuriu found in their answer, posted just as I was in the process of writing this, -i will force bash to use "interactive mode", and it will clean up properly after itself.
For my part, I think this is a bug. It is documented in the man page that if stdin is not a TTY, the -s option to read is ignored. But in my example stdin is still a TTY, while bash is not otherwise technically in interactive mode, despite still invoking interactive behavior. It should still clean up properly from a SIGINT in this case.
For what it's worth, here's a Python-specific (but easily generalizable) workaround. First I make sure that SIGINT (and SIGTERM, for good measure) are passed to the subprocess. Then I wrap the whole subprocess.Popen call in a little context manager for the terminal settings:
import contextlib
import os
import signal
import subprocess as sp
import sys
import termios

@contextlib.contextmanager
def restore_tty(fd=sys.stdin.fileno()):
    if os.isatty(fd):
        save_tty_attr = termios.tcgetattr(fd)
        yield
        termios.tcsetattr(fd, termios.TCSAFLUSH, save_tty_attr)
    else:
        yield

@contextlib.contextmanager
def send_signals(proc, *sigs):
    def handle_signal(signum, frame):
        try:
            proc.send_signal(signum)
        except OSError:
            # process has already exited, most likely
            pass

    prev_handlers = []
    for sig in sigs:
        prev_handlers.append(signal.signal(sig, handle_signal))
    yield
    for sig, handler in zip(sigs, prev_handlers):
        signal.signal(sig, handler)

with restore_tty():
    p = sp.Popen(['bash', '-c', 'read -s test; echo $test'])
    with send_signals(p, signal.SIGINT, signal.SIGTERM):
        p.wait()
I'd still be interested in an answer that explains why this is necessary at all though--why can't bash clean itself up better?

Assign shell script output to python variable ignoring error messages

I have a Python script that I am using to call a bash script that renames a file. I then need the new name of the file so Python can do some further processing on it. I'm using subprocess.Popen to call the shell script. The shell script echoes the new file name, so I can use stdout=subprocess.PIPE to get it.
The problem is that sometimes the bash script tries to rename the file to its old name, depending on the circumstances, and so the mv command gives the message that the two files are the same. I have cut out all the other stuff and included a basic example below.
$ ls -1
test.sh
test.txt
This shell script is just an example to force the error message.
$ cat test.sh
#!/bin/bash
mv "test.txt" "test.txt"
echo "test"
In python:
$ python
>>> import subprocess
>>> p = subprocess.Popen(['/bin/bash', '-c', './test.sh'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
>>> p.stdout.read()
"mv: `test.txt' and `test.txt' are the same file\ntest\n"
How can I ignore the message from the mv command and only get the output of the echo command? If all goes well the only output of the shell script would be the result of the echo so really I just need to ignore the mv error message.
Thanks,
Geraint
Direct stderr to null, thusly:
$ python
>>> import os
>>> from subprocess import *
>>> p = Popen(['/bin/bash', '-c', './test.sh'], stdout=PIPE, stderr=open(os.devnull, 'w'))
>>> p.stdout.read()
To get subprocess' output and ignore its error messages:
#!/usr/bin/env python
from subprocess import check_output
import os
with open(os.devnull, 'wb', 0) as DEVNULL:
    output = check_output("./test.sh", stderr=DEVNULL)
check_output() raises an exception if the script returns with non-zero status.
See How to hide output of subprocess in Python 2.7.
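On Python 3.3+, the same thing can be written without opening os.devnull by hand, using subprocess.DEVNULL:
from subprocess import check_output, DEVNULL

# stderr is discarded; only the script's stdout is captured
output = check_output("./test.sh", stderr=DEVNULL)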

Running a bash script from Python

I need to run a bash script from Python. I got it to work as follows:
import os
os.system("xterm -hold -e scipt.sh")
That isn't exactly what I am doing but pretty much the idea. That works fine, a new terminal window opens and I hold it for debugging purposes, but my problem is I need the python script to keep running even if that isn't finished. Any way I can do this?
I recommend you use the subprocess module: docs
And you can do:
import subprocess
cmd = "xterm -hold -e script.sh"
# does not block: this starts a subprocess and returns immediately
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# you can block until the command finishes
p.wait()
# or stdout, stderr = p.communicate()
For more info, read the docs. :)
