Using Python sh module, how to not suppress the interactive vim command?

import sh
sh.vim("lalala")
does not show the vim editor in my console. Setting the _bg=False kwarg makes no difference (since that's already the default value).
If instead I use the subprocess module, it works:
import subprocess
subprocess.call(["vim", "lalala"])

The problem is that vim expects its stdin to be a TTY, but the pipe created by sh is not a TTY, it's a pipe.
The solution is to not try to intercept vim's standard I/O with pipes. Since intercepting stdio with pipes is the entire purpose of sh, rather than trying to find a way to fight against it, you're better off not using it. Just use the stdlib's subprocess module, which only intercepts stdio if you go out of your way to ask it to:
subprocess.check_call(['vim', 'lalala'])
But notice the TTYs section in the sh docs:
Some applications behave differently depending on whether their standard file descriptors are attached to a TTY or not. For example, git will disable features intended for humans such as colored and paged output when STDOUT is not attached to a TTY. Other programs may disable interactive input if a TTY is not attached to STDIN. Still other programs, such as SSH (without -n), expect their input to come from a TTY/terminal.
By default, sh emulates a TTY for STDOUT but not for STDIN. You can change the default behavior by passing in extra special keyword arguments…
So, if you pass _tty_in=True, then vim's input will be an emulated TTY instead of a pipe.
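For example, a minimal sketch using the special kwarg the docs describe:
import sh
sh.vim("lalala", _tty_in=True)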
But that still isn't going to do much good. It'll allow vim to run, but it'll run using the fake TTY created by sh for its input and output, which I'm pretty sure is not what you want. (If you were looking to send it control sequences and capture and process the control sequences it sends back, it would almost certainly be simpler to just script ed—or, better, sed—instead…)
So why aren't you getting some kind of error message or other sane behavior?
Really, that's down to vim. If you try the same thing with emacs, or any app that uses curses, and many other TTY apps, they'll write an error message to stderr and exit with 1, so you'll see something like this:
ErrorReturnCode_1:
RAN: '/usr/bin/emacs -nw'
STDOUT:
STDERR:
emacs: standard input is not a tty

Related

missing stdout before subprocess.Popen crash [duplicate]

I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module and it does not set printing to be flushed such as print('example', flush=True). Is there a way to force the flushing through my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs work differently if run interactively in a terminal or as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself, and more with the Unix/Linux architecture.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, by adding explicit sys.stdout.flush() calls.
Another way to print to screen is to "trick" the program into thinking it is working with an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you do not explicitly call subprocess.run (or Popen or ...). Instead you use the pty.spawn call:
import os
import pty

def prout(fd):
    # master_read callback: pty.spawn calls this repeatedly with the
    # master fd; read one chunk, process it, and return it so that
    # pty.spawn echoes it to our own stdout
    data = os.read(fd, 1024)
    # log or parse data here if needed
    return data

pty.spawn("./callee.py", prout)
As can be seen, this requires a callback for handling stdout. Here the chunk is simply returned, which makes pty.spawn echo it to the terminal, but it is of course possible to do other things with the text as well (such as logging or parsing) before returning it.
Another way to trick the program is to use an external program called unbuffer. unbuffer runs your command and makes the program think (as with the pty call) that it is connected to a terminal. This is arguably simpler if unbuffer is installed, or you are allowed to install it on your system (it is part of the expect package). All you have to do then is change your subprocess call to
p = subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then of course handle the output as usual, e.g. with some code like
for line in p.stdout:
    print(line.decode(), end="")

or read everything at once with

print(p.communicate()[0].decode(), end="")
or similar. But this last part I think you have already covered, as you seem to be doing something with the output.

twisted reactor.spawnProcess get stdout w/o buffering on windows

I'm running an external process and I need to get the stdout immediately so I can push it to a textview. On GNU/Linux I can use "usePTY=True" to get the stdout by line; unfortunately usePTY is not available on windows.
I'm fairly new to twisted, is there a way to achieve the same result on Windows with some twisted (or python maybe) magic stuff?
on GNU/Linux I can use "usePTY=True" to get the stdout by line
Sort of! What usePTY=True actually does is create a PTY (a "pseudo-terminal" - the thing you always get when you log in to a shell on GNU/Linux unless you have a real terminal which no one does anymore :) instead of a boring old pipe. A PTY is a lot like a pipe but it has some extra features - but more importantly for you, a PTY is strongly associated with interactive sessions (ie, a user) whereas a pipe is pretty strongly associated with programmatic uses (think foo | bar - no user ever sees the output of foo).
This means that people tend to use existence of a PTY as stdout as a signal that they should produce output in a timely manner - because a human is waiting to see it. On the flip side, the existence of a regular old pipe as stdout is taken as a signal that another program is consuming the output and they should instead produce output in the most efficient way possible.
What this tends to mean in practice is that if a program has a PTY then it will line buffer its output and if it has a pipe then it will "block" buffer its output (usually gather up about 4kB of data before writing any of it) - because line buffering is less efficient.
The thing to note here is that it is the program you are running that does this buffering. Whether you pass usePTY=True or usePTY=False makes no direct difference to that buffering: it is just a hint to the program you are running what kind of output buffering it should do.
This means that you might run programs that block buffer even if you pass usePTY=True and vice versa.
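In Python, for instance, the hint a program checks comes down to a single call (just an illustration, not something from the original answer):
import sys
print(sys.stdout.isatty())  # True when stdout is a (P)TY, False when it is a pipe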
However... Windows doesn't have PTYs. So programs on Windows can't consider PTYs as a hint for how to buffer their output.
I don't actually know if there is another hint that it is conventional for programs to respect on Windows. I've never come across one, at least.
If you're lucky, then the program you're running will have some way for you to request line-buffered output. If you're running Python, then it does - the PYTHONUNBUFFERED environment variable controls this, as does the -u command line option (and I think they both work on Windows).
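For instance, here is a minimal sketch of driving an unbuffered Python child from Twisted (child.py is a hypothetical script name):
import os
import sys
from twisted.internet import protocol, reactor

class LinePrinter(protocol.ProcessProtocol):
    def outReceived(self, data):
        # with -u the child's stdout is unbuffered, so chunks arrive promptly
        sys.stdout.write(data.decode())
    def processEnded(self, reason):
        reactor.stop()

# -u (or PYTHONUNBUFFERED=1 in env) disables the child's output buffering
reactor.spawnProcess(LinePrinter(), sys.executable,
                     [sys.executable, "-u", "child.py"], env=os.environ)
reactor.run()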
Incidentally, if you plan to pass binary data between the two processes, then you probably also want to put stdio into binary mode in the child process as well:
import os, sys, msvcrt
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)

Why does stdout not flush when connecting to a process that is run with supervisord?

I am using Supervisor (process controller written in python) to start and control my web server and associated services. I find the need at times to enter into pdb (or really ipdb) to debug when the server is running. I am having trouble doing this through Supervisor.
Supervisor allows the processes to be started and controlled with a daemon called supervisord, and offers access through a client called supervisorctl. This client allows you to attach to one of the foreground processes that has been started using a 'fg' command. Like this:
supervisor> fg webserver
All logging data gets sent to the terminal. But I do not get any text from the pdb debugger. It does accept my input so stdin seems to be working.
As part of my investigation I was able to confirm that neither print nor raw_input sends any text out either; but in the case of raw_input the stdin is indeed working.
I was also able to confirm that this works:
sys.stdout.write('message')
sys.stdout.flush()
I thought that when I issued the fg command it would be as if I had run the process in the foreground in a standard terminal ... but it appears that supervisorctl is doing something more. Regular printing does not flush, for example. Any ideas?
How can I get pdb, standard prints, etc to work properly when connecting to the foreground terminal using the fg command in supervisorctl?
(Possible helpful ref: http://supervisord.org/subprocess.html#nondaemonizing-of-subprocesses)
It turns out that Python defaults to buffering its output stream. In certain cases (such as this one) that results in output being held back.
Idioms like this exist to force the buffer to zero (note that unbuffered text mode like this only works on Python 2):
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
But the better alternative I think is to start the base python process in an unbuffered state using the -u flag. Within the supervisord.conf file it simply becomes:
command=python -u script.py
ref: http://docs.python.org/2/using/cmdline.html#envvar-PYTHONUNBUFFERED
Also note that this dirties up your log file - especially if you are using something like ipdb with ANSI coloring. But since it is a dev environment it is not likely that this matters.
If this is an issue - another solution is to stop the process to be debugged in supervisorctl and then run the process temporarily in another terminal for debugging. This would keep the logfiles clean if that is needed.
It could be that your webserver redirects its own stdout (internally) to a log file (i.e. it ignores supervisord's stdout redirection), and that prevents supervisord from controlling where its stdout goes.
To check if this is the case, you can tail -f the log, and see if the output you expected to see in your terminal goes there.
If that's the case, see if you can find a way to configure your webserver not to do that, or, if all else fails, try working with two terminals... (one for input, one for output)

Getting live output from running unix command in python

I am using the code below to run unix commands:
cmd = 'ls -l'
(status,output) = commands.getstatusoutput(cmd)
print output
But the problem is that it shows the output only after the command has completed, whereas I want to see the output printed as the execution progresses.
ls -l is just a dummy command; I am using a more complex command in the actual program.
Thanks!!
Since this is homework, here's what to do instead of the full solution:
Use the subprocess.Popen class to call the executable. Note that the constructor takes a named stdout argument, and take a look at subprocess.PIPE.
Read from the Popen object's stdout pipe in a separate thread to avoid deadlocks. See the threading module.
Wait until the subprocess has finished (see Popen.wait).
Wait until the thread has finished processing the output (see Thread.join). Note that this may very well happen after the subprocess has finished.
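Purely as an illustration, a sketch of those steps (the command is a placeholder):
import subprocess
import threading

def pump(stream):
    # runs in a separate thread, printing each line as it arrives
    for line in stream:
        print(line.decode(), end="")

p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
t = threading.Thread(target=pump, args=(p.stdout,))
t.start()
p.wait()   # wait until the subprocess has finished
t.join()   # the thread may well finish after the subprocess does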
If you need more help please describe your precise problem.
Unless there are simpler ways in Python which I'm not aware of, I believe you'll have to dig into the slightly more complex os.fork and os.pipe functions.
Basically, the idea is to fork your process, have the child execute your command, while having its standard output redirected to a pipe which will be read by the parent. You'll easily find examples of this kind of pattern.
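A minimal sketch of that pattern (with ls -l standing in for the real command):
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # child: point stdout at the write end of the pipe, then exec
    os.close(r)
    os.dup2(w, 1)
    os.close(w)
    os.execvp("ls", ["ls", "-l"])
else:
    # parent: read the child's output line by line as it arrives
    os.close(w)
    with os.fdopen(r) as child_out:
        for line in child_out:
            print(line, end="")
    os.waitpid(pid, 0)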
Most programs will use block buffered output if they are not connected to a tty, so you need to run the program connected to a pty; the easiest way is to use pexpect:
import pexpect

for line in pexpect.spawn('command arg1 arg2'):
    print(line.decode(), end='')

How to disable shell interception of control characters?

I'm writing a curses application in Python under UNIX. I want to enable the user to use C-Y to yank from a kill ring a la Emacs.
The trouble is, of course, that C-Y is caught by my shell which then sends SIGTSTP to my process. In addition, C-Z also results in SIGTSTP being sent, so catching the signal means that C-Y and C-Z are not distinguishable (though even without this the only solutions I can think of are extremely hackish).
I know what I'm asking is possible (in C if not in Python), since Emacs does it. How can I disable the shell's special handling of certain control characters sent from the keyboard and have the characters in question appear on the process' stdin?
See the termios module, and the termios(3) man page.
For basic functionality, use tty. For example, calling tty.setraw(sys.stdin) will put standard input's terminal into raw mode.
For the more general case, Python comes with a termios library, but you probably need some experience with termios to know how to use it.
Alternatively, a cheap way is to shell out to stty, which is a command-line interface to termios.
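A common pattern (an illustration, not from the answers above): save the current settings, switch to raw mode so control characters reach stdin instead of being interpreted, and always restore on exit:
import sys
import termios
import tty

fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
    tty.setraw(fd)
    ch = sys.stdin.read(1)  # C-y now arrives as '\x19' instead of suspending the process
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)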
