As a PHP programmer (of sorts) very new to operating systems and command-line processes, I'm surprised that in Python everything a user types while a program is busy seems to be buffered, waiting to pour out at the first call to raw_input (for example).
I found some code to call before raw_input which seems to "solve" the problem on OS X, even though part of it is supposedly providing access to Windows capabilities:
class FlushInput(object):
    def flush_input(self):
        try:
            import msvcrt
            while msvcrt.kbhit():
                msvcrt.getch()
        except ImportError:
            import sys, termios
            termios.tcflush(sys.stdin, termios.TCIOFLUSH)
Am I understanding correctly that the handling of stdin, stdout, and stderr will vary between operating systems?
I imagine that maybe a framework like Django has built-in methods that simplify the interactivity, but does it basically take a few lines of code just to tell Python, "don't accept any input until it's invited"?
Ahhh. If I'm understanding this correctly (and I'm sure the understanding needs much refining), the answer is yes: stdin, stdout, and stderr, the "standard" input, output, and error streams, and their handling may vary from operating system to operating system, because they are products of the OS and NOT of any particular programming language.
The expectation that "telling Python to ignore stdin until input is requested" would be automatic stems from thinking of the "terminal" as if it were a typewriter. Where the goal of a typewriter is to record strings of information in a human-readable format, the goal of a terminal is to transmit information that will ultimately be converted to a machine-readable format, and to return human-readable responses.
What most of us coming to computing today think of as a "terminal" is actually a virtual recreation of a physical machine, also known as a terminal, which used to be the means by which data was input to and read from a computer processor, right? And a text editor is an application that creates a virtual typewriter out of the keyboard, monitor, and processing capabilities of the operating system and its included libraries of programs.
An application like the macOS Terminal, or even the tty we use to engage with another server over an SSH connection, is actually creating a virtual terminal through which we can engage with the processor, by sending information to stdin and receiving it from stdout and stderr. When the letters we type appear on the terminal screen, it is because they are being "echoed" back into the terminal window.
So there's no reason to expect that the relationship between Python, or any other language, and a terminal would by default block the input stream coming from that terminal.
The above code uses Python's exception handling to provide two alternative ways of flushing the input stream before the program continues. On OS X (and other POSIX platforms) the code:
import sys, termios
termios.tcflush(sys.stdin, termios.TCIOFLUSH)
imports sys so we have access to stdin, and termios, which is Python's interface to the POSIX (Linux, UNIX) terminal-control functions that sit between the program and the terminal driver. tcflush is a function that accepts two parameters: the first names WHICH terminal to flush, given as a file descriptor (fd) or a file object such as sys.stdin, and the second names which queue of that terminal to discard: TCIFLUSH for data received but not yet read, TCOFLUSH for data written but not yet transmitted, or TCIOFLUSH for both.
msvcrt is Python's module for interacting with the Microsoft Visual C runtime on Windows. msvcrt.kbhit() reports whether a keypress is waiting in the console's input buffer, and msvcrt.getch() reads one keypress, so the loop drains the input queue one character at a time.
The UNIX and Windows calls could be swapped, so that rather than saying "try doing it the Windows way, and if an ImportError is raised do it the UNIX way," we try the UNIX way first:
class FlushInput(object):
    def flush_input(self):
        try:
            import sys, termios
            termios.tcflush(sys.stdin, termios.TCIOFLUSH)
        except ImportError:
            import msvcrt
            while msvcrt.kbhit():
                msvcrt.getch()
Here's a termios introduction that helped clarify the process.
Related
import sh
sh.vim("lalala")
does not show the vim editor in my console. Setting the _bg=False kwarg makes no difference (since that's already the default value).
If instead I use the subprocess module, it works:
import subprocess
subprocess.call(["vim", "lalala"])
The problem is that vim expects its stdin to be a TTY, but what sh attaches to its stdin is not a TTY; it's a pipe.
The solution is to not try to intercept vim's standard I/O with pipes. Since intercepting stdio with pipes is the entire purpose of sh, rather than trying to find a way to fight against it, you're better off not using it. Just use the stdlib's subprocess module, which only intercepts stdio if you go out of your way to ask it to:
subprocess.check_call(['vim', 'lalala'])
But notice the TTYs section in the sh docs:
Some applications behave differently depending on whether their standard file descriptors are attached to a TTY or not. For example, git will disable features intended for humans such as colored and paged output when STDOUT is not attached to a TTY. Other programs may disable interactive input if a TTY is not attached to STDIN. Still other programs, such as SSH (without -n), expect their input to come from a TTY/terminal.
By default, sh emulates a TTY for STDOUT but not for STDIN. You can change the default behavior by passing in extra special keyword arguments…
So, if you pass _tty_in=True, then vim's input will be an emulated TTY instead of a pipe.
But that still isn't going to do much good. It'll allow vim to run, but it'll run using the fake TTY created by sh for its input and output, which I'm pretty sure is not what you want. (If you were looking to send it control sequences and capture and process the control sequences it sends back, it would almost certainly be simpler to just script ed—or, better, sed—instead…)
So why aren't you getting some kind of error message or other sane behavior?
Really, that's down to vim. If you try the same thing with emacs, or any app that uses curses, and many other TTY apps, they'll write an error message to stderr and exit with 1, so you'll see something like this:
ErrorReturnCode_1:
RAN: '/usr/bin/emacs -nw'
STDOUT:
STDERR:
emacs: standard input is not a tty
I'm running an external process and I need to get its stdout immediately so I can push it to a textview. On GNU/Linux I can use usePTY=True to get the stdout line by line; unfortunately usePTY is not available on Windows.
I'm fairly new to Twisted; is there a way to achieve the same result on Windows with some Twisted (or maybe plain Python) magic?
on GNU/Linux I can use "usePTY=True" to get the stdout by line
Sort of! What usePTY=True actually does is create a PTY (a "pseudo-terminal" - the thing you always get when you log in to a shell on GNU/Linux unless you have a real terminal which no one does anymore :) instead of a boring old pipe. A PTY is a lot like a pipe but it has some extra features - but more importantly for you, a PTY is strongly associated with interactive sessions (ie, a user) whereas a pipe is pretty strongly associated with programmatic uses (think foo | bar - no user ever sees the output of foo).
This means that people tend to use existence of a PTY as stdout as a signal that they should produce output in a timely manner - because a human is waiting to see it. On the flip side, the existence of a regular old pipe as stdout is taken as a signal that another program is consuming the output and they should instead produce output in the most efficient way possible.
What this tends to mean in practice is that if a program has a PTY then it will line buffer its output and if it has a pipe then it will "block" buffer its output (usually gather up about 4kB of data before writing any of it) - because line buffering is less efficient.
The thing to note here is that it is the program you are running that does this buffering. Whether you pass usePTY=True or usePTY=False makes no direct difference to that buffering: it is just a hint to the program you are running what kind of output buffering it should do.
This means that you might run programs that block buffer even if you pass usePTY=True and vice versa.
However... Windows doesn't have PTYs. So programs on Windows can't consider PTYs as a hint for how to buffer their output.
I don't actually know if there is another hint that it is conventional for programs to respect on Windows. I've never come across one, at least.
If you're lucky, then the program you're running will have some way for you to request line-buffered output. If you're running Python, then it does - the PYTHONUNBUFFERED environment variable controls this, as does the -u command line option (and I think they both work on Windows).
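As a rough illustration (the child program and its two-second sleep are stand-ins of my own, not from the answer), setting PYTHONUNBUFFERED shows the first line arriving well before the child exits:

```python
import os
import subprocess
import sys
import time

# The child prints one line, then stays alive for two seconds.  With
# PYTHONUNBUFFERED=1 (or python -u) that line reaches the pipe at once;
# without it, block buffering would hold the line until the child exits.
child = "print('line 1'); import time; time.sleep(2)"

env = dict(os.environ, PYTHONUNBUFFERED="1")
proc = subprocess.Popen(
    [sys.executable, "-c", child],
    stdout=subprocess.PIPE,
    env=env,
    universal_newlines=True,
)

start = time.time()
first = proc.stdout.readline()   # returns as soon as the line is flushed
elapsed = time.time() - start
proc.wait()
```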
Incidentally, if you plan to pass binary data between the two processes, then you probably also want to put stdio into binary mode in the child process as well:
import os, sys, msvcrt
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
Can my python script spawn a process that will run indefinitely?
I'm not too familiar with Python, nor with spawning daemons, so I came up with this:
si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NEW_CONSOLE
subprocess.Popen(executable, close_fds = True, startupinfo = si)
The process continues to run after python.exe exits, but it is closed as soon as I close the cmd window.
Using the answer Janne Karila pointed out, this is how you can run a process that doesn't die when its parent dies; there's no need to use the win32process module.
DETACHED_PROCESS = 8
subprocess.Popen(executable, creationflags=DETACHED_PROCESS, close_fds=True)
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying CreateProcess function.
This question was asked 3 years ago, and though the fundamental details of the answer haven't changed, given its prevalence in "Windows Python daemon" searches, I thought it might be helpful to add some discussion for the benefit of future Google arrivals.
There are really two parts to the question:
Can a Python script spawn an independent process that will run indefinitely?
Can a Python script act like a Unix daemon on a Windows system?
The answer to the first is an unambiguous yes: as already pointed out, using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword will suffice:
import subprocess

independent_process = subprocess.Popen(
    'python /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
Note that, at least in my experience, CREATE_NEW_CONSOLE is not necessary here.
That being said, the behavior of this strategy isn't quite the same as what you'd expect from a Unix daemon. What constitutes a well-behaved Unix daemon is better explained elsewhere, but to summarize:
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
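Those steps can be sketched in a few lines of POSIX-only Python (a minimal illustration of the checklist above, not a complete well-behaved daemon; the double fork plus setsid handles the background and dissociation steps):

```python
import os
import signal
import sys

def daemonize(workdir="/", umask=0o022):
    # Minimal daemonization sketch, assuming a POSIX system.
    if os.fork() > 0:
        os._exit(0)            # first fork: the original parent returns
    os.setsid()                # new session, no controlling terminal
    if os.fork() > 0:
        os._exit(0)            # second fork: never a session leader again
    os.chdir(workdir)          # don't hold any directory busy
    os.umask(umask)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):       # stdin, stdout, stderr -> /dev/null
        os.dup2(devnull, fd)
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
```

A real daemon would also close other inherited descriptors and usually write a pid file; none of this applies on Windows, which is the point of the discussion that follows.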
The reality of the situation is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
Rolling your application into a Windows service, though a viable alternative in many cases, also doesn't quite fit. The same is true of using pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries). In particular, they fail to improve the situation for signal handling, and they cannot easily launch an application from a terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing". Additionally, Windows services require installation, which -- though perfectly possible to do quickly at runtime when you first call up your "daemon" -- modifies the user's system (registry, etc), which would be highly unexpected if you're coming from a Unix world.
In light of that, I would argue that launching a pythonw.exe subprocess using subprocess.CREATE_NEW_PROCESS_GROUP is probably the closest Windows equivalent for a Python process to emulate a traditional Unix daemon. However, that still leaves you with the added challenge of signal handling and startup communications (not to mention making your code platform-dependent, which is always frustrating).
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above strategy. It also implements signal handling (for both Unix and Windows systems), and allows you to pass objects to the "daemon" process using pickle. Best of all, it has a cross-platform API:
from daemoniker import Daemonizer

with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
For that purpose you could daemonize your Python process, or, since you are in a Windows environment, run it as a Windows service.
I hate to post only web links, but for more information relevant to your requirement:
A simple way to implement a Windows service (read all the comments there; they will resolve any doubts)
If you really want to learn more, first read this:
what a daemon process is, or creating-a-daemon-the-python-way
Update: subprocess is not the right way to achieve this kind of thing.
There's a similar question to mine in another thread.
I want to send a command to my subprocess, interpret the response, then send another command. It would seem a shame to have to start a new subprocess to accomplish this, particularly if subprocess2 must perform many of the same tasks as subprocess1 (e.g. ssh, open mysql).
I tried the following:
subprocess1.stdin.write([my commands])
subprocess1.stdin.flush()
subprocess1.stdout.read()
But without a definite byte count for read(), the program blocks on that instruction, and I can't supply an argument to read() because I can't guess how many bytes are available in the stream.
I'm running WinXP, Py2.7.1
EDIT
Credit goes to @regularfry for giving me the best solution for my real intention (read the comments on his response, as they pertain to accomplishing my goal through an SSH tunnel); his answer has been voted up. For the benefit of any viewer who hereafter comes for an answer to the title question, however, I've accepted @Mike Pennington's answer.
Your choices are:
Use a line-oriented protocol (and use readline() rather than read()), and ensure that every possible line sent is a valid message;
Use read(1) and a parser to tell you when you've read a full message; or
Pickle message objects into the stream from the subprocess, then unpickle them in the parent. This handles the message length problem for you.
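Here's a sketch of the first choice; the echo child below is a stand-in of mine for the real ssh/mysql session, but it shows why readline() returns where read() would hang:

```python
import subprocess
import sys

# A throwaway child that echoes each line back, prefixed with "got ".
child_src = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    sys.stdout.write('got ' + line)\n"
    "    sys.stdout.flush()\n"
)

proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_src],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)

proc.stdin.write("first command\n")
proc.stdin.flush()
reply = proc.stdout.readline()    # returns at the newline, never hangs
proc.stdin.write("second command\n")
proc.stdin.flush()
reply2 = proc.stdout.readline()

proc.stdin.close()
proc.wait()
```

The catch, as the answer says, is that every response from the real program must be guaranteed to end with a newline, or readline() will block just like read() did.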
@JellicleCat, I'm following up on the comments. I believe wexpect is a part of sage... AFAIK, it is not packaged separately, but you can download wexpect here.
Honestly, if you're going to drive programmatic ssh sessions, use paramiko. It is supported as an independent installation, has good packaging, and should install natively on Windows.
EDIT
Sample paramiko script to cd to a directory, execute an ls, and exit, capturing all results:
import sys
sys.stderr = open('/dev/null') # Silence silly warnings from paramiko
import paramiko as pm
sys.stderr = sys.__stderr__
import os
class AllowAllKeys(pm.MissingHostKeyPolicy):
    def missing_host_key(self, client, hostname, key):
        return
HOST = '127.0.0.1'
USER = ''
PASSWORD = ''
client = pm.SSHClient()
client.load_system_host_keys()
client.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
client.set_missing_host_key_policy(AllowAllKeys())
client.connect(HOST, username=USER, password=PASSWORD)
channel = client.invoke_shell()
stdin = channel.makefile('wb')
stdout = channel.makefile('rb')
stdin.write('''
cd tmp
ls
exit
''')
print stdout.read()
stdout.close()
stdin.close()
client.close()
This approach will work (I've done it), but it will take some time, and it uses Unix-specific calls. You'll have to abandon the subprocess module and roll your own equivalent based on fork/exec and os.pipe().
Use the fcntl.fcntl function to place the stdin/stdout file descriptors (read and write) for your child process into non-blocking mode (O_NONBLOCK option constant) after creating them with os.pipe().
Use the select.select function to poll or wait for availability on your file descriptors. To avoid deadlocks you will need to use select() to ensure that writes will not block, just like reads. Even still, you must account for OSError exceptions when you read and write, and retry when you get EAGAIN errors. (Even when using select before read/write, EAGAIN can occur in non-blocking mode; this is a common kernel bug that has proven difficult to fix.)
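A sketch of those two steps on a plain os.pipe() (the helper name set_nonblocking is mine), showing both the select() wait and the EAGAIN case that must be handled:

```python
import errno
import fcntl
import os
import select

def set_nonblocking(fd):
    # Put a file descriptor into non-blocking mode via O_NONBLOCK.
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

r, w = os.pipe()
set_nonblocking(r)
set_nonblocking(w)

os.write(w, b"hello")
readable, _, _ = select.select([r], [], [], 1.0)   # wait for data
data = os.read(r, 4096) if readable else b""

# With nothing left to read, a non-blocking read raises EAGAIN instead
# of hanging; real code catches it and goes back to select().
try:
    os.read(r, 4096)
    got_eagain = False
except OSError as exc:
    got_eagain = (exc.errno == errno.EAGAIN)
```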
If you are willing to implement on the Twisted framework, they have supposedly solved this problem for you; all you have to do is write a Process subclass. But I haven't tried that myself yet.
Is there a way to programmatically interrupt Python's raw_input? Specifically, I would like to present a prompt to the user, but also listen on a socket descriptor (using select, for instance) and interrupt the prompt, output something, and redisplay the prompt if data comes in on the socket.
The reason for using raw_input rather than simply doing select on sys.stdin is that I would like to use the readline module to provide line editing functionality for the prompt.
As far as I know... "Sort of".
raw_input is blocking, so the only way I can think of is spawning a subprocess/thread to retrieve the input, and then simply communicating with that thread/subprocess. It's a pretty dirty hack (at least it seems that way to me), but it should work cross-platform. The other alternative, of course, is to use the curses module on Linux or get this one for Windows.
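A sketch of that hack (start_input_thread and the queue protocol are names of my own, not a standard API): run the blocking prompt in a daemon thread that feeds a queue the main loop can poll alongside its sockets.

```python
import threading
try:
    from queue import Queue          # Python 3
except ImportError:
    from Queue import Queue          # Python 2

def start_input_thread(prompt, q):
    # Run the blocking prompt function (raw_input/input, possibly with
    # readline loaded for line editing) in a daemon thread.  Each line
    # is pushed onto the queue; None signals end-of-input.
    def worker():
        while True:
            try:
                q.put(prompt())
            except EOFError:
                q.put(None)
                return
    t = threading.Thread(target=worker)
    t.daemon = True                  # don't keep the process alive
    t.start()
    return t
```

The main loop can then select() on its sockets with a short timeout and drain the queue with q.get_nowait() between iterations, redisplaying the prompt after printing any socket output.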