I guess I'm not clear on what the function of the getty/agetty/mgetty programs is on a Linux/Unix machine. I can start a shell on a tty with something like this:
import os
import sys

TTY = '/dev/tty3'
cpid = os.fork()
if cpid == 0:
    os.closerange(0, 4)          # close the old standard streams
    sys.stdin = open(TTY, 'r')   # reuses fd 0
    sys.stdout = open(TTY, 'w')  # fd 1
    sys.stderr = open(TTY, 'w')  # fd 2
    os.execv('/bin/bash', ('bash',))
...and if I switch over to tty3, there is a shell running, but some keystrokes are ignored or never reach the shell. The shell knows the TTY settings are not correct, because bash will say something like 'unable to open tty, job control disabled'.
I know the 'termios' module has functions to change the settings on the TTY, which is what the 'tty' module uses, but I am unable to find an example of Python setting up the TTY correctly and starting a shell. I feel like it should be something simple, but I don't know where to look.
Looking at the source for the *etty programs didn't help me; C looks like Greek to me :-/
Maybe I'm just not looking for the right terms? Has anyone replaced the *etty programs with Python in the past and have an explanation they would care to share?
Thanks for entertaining my basic question :)
I can see at least two things you're missing - there may be more:
Firstly, you need to call setsid() in the child process after closing the old standard input/standard output, and before opening the new TTY. This does two important things - it makes your new process the leader of a new session, and it disassociates it from its previous controlling terminal (merely closing that terminal is not sufficient). This will mean that when you open the new tty, it will become the controlling terminal, which is what you want.
Secondly, you need to set the TERM environment variable to match the new tty.
You should have a look at the source of the *tty* programs, to see what they do.
My guess is that they mostly issue a bunch of ioctl commands to initialise the terminal into the mode that programs normally expect (e.g. for login etc). However some of them may also prompt for a username (not password; login does that).
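Putting those two points together with the question's code, a minimal sketch of the child-side setup might look like this (the TERM value 'linux' is an assumption for a Linux virtual console; adjust it to your system):
import os

TTY = '/dev/tty3'

if os.fork() == 0:
    os.closerange(0, 4)           # drop the old standard streams
    os.setsid()                   # new session; detach from the old controlling terminal
    fd = os.open(TTY, os.O_RDWR)  # first tty opened after setsid() becomes controlling
    os.dup2(fd, 0)                # wire it up as stdin ...
    os.dup2(fd, 1)                # ... stdout ...
    os.dup2(fd, 2)                # ... and stderr
    os.environ['TERM'] = 'linux'  # assumption: Linux virtual console
    os.execv('/bin/bash', ('bash',))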
Related
I am dealing with a Python script which, after some preparation work, launches ssh. My script is actually a small CLI tool. On Unix-like systems, at the end of its life, the Python script replaces itself with the ssh client, so the user can then interact with ssh directly (i.e. run arbitrary commands on the remote machine etc):
os.execvpe('ssh', ['ssh', '-o', 'foo', 'user@host'], os.environ)
Positive surprise and side-note in case you are wondering: Windows 10 actually has a native version of OpenSSH built in now, so there is an ssh command on this platform.
os.execvpe is present in the Python standard library on Windows, but it does not replace the original (Python) process. The situation is ... somewhat complicated: 1, 2, 3. Bottom line: Windows does not implement the corresponding POSIX semantics for replacing a running process.
The common wisdom is to use subprocess.Popen instead, effectively creating a child process. I can launch the child so that the parent keeps running, OR I can launch the child while the parent dies (I think Windows supports the latter just like Unix-like systems do). Either way, the user cannot interact with the child on the command line.
Assuming that I keep the parent alive, I now have to write a ton of code to pass user I/O to/from the child through the parent, like so for instance. That involves managing streams and even threads, depending on how well it is supposed to behave: a lot of places for potential issues and breakages down the road. I would rather not do this (if I can avoid it).
How can I efficiently replace os.execvpe on Windows in the described scenario?
EDIT (1): Bits and pieces, which may be relevant ...
Handle Inheritance I
Handle Inheritance II
STARTUPINFO in Windows
STARTUPINFO in Windows - for Python
I guess it depends on figuring out how to correctly configure a STARTUPINFO object before passing it into Popen. A command line can in fact be inherited in Windows.
EDIT (2): A partial solution via pywin32 - ssh opens into a second, new cmd window and can be interacted with. The original shell with Python remains open, Python itself quits:
from win32.Demos.winprocess import Process
from shlex import join
Process(join(['ssh', '-o', 'foo', 'user@host']))
A partial and incomplete solution looks roughly as follows; see the TODO comments:
import win32api, win32process, win32con
from shlex import join
si = win32process.STARTUPINFO()
# TODO fix flags
si.dwFlags = win32con.STARTF_USESTDHANDLES | win32con.STARTF_USESHOWWINDOW
# inherit stdin, stdout and stderr
si.hStdInput = win32api.GetStdHandle(win32api.STD_INPUT_HANDLE)
si.hStdOutput = win32api.GetStdHandle(win32api.STD_OUTPUT_HANDLE)
si.hStdError = win32api.GetStdHandle(win32api.STD_ERROR_HANDLE)
# TODO fix value?
si.wShowWindow = 1
# TODO set values?
# si.dwX, si.dwY = ...
# si.dwXSize, si.dwYSize = ...
# si.lpDesktop = ...
procArgs = (
    None,                                     # appName
    join(['ssh', '-o', 'foo', 'user@host']),  # commandLine
    None,                                     # processAttributes
    None,                                     # threadAttributes
    1,                                        # bInheritHandles TODO ?
    win32process.CREATE_NEW_CONSOLE,          # dwCreationFlags
    None,                                     # newEnvironment
    None,                                     # currentDirectory
    si,                                       # startupinfo
)
procHandles = win32process.CreateProcess(*procArgs) # run ...
ssh opens into a second, new cmd.exe window and can be interacted with. The original cmd.exe window with Python in it remains open; Python itself quits, returning control to cmd.exe. It is usable, although inconsistent and ugly.
I guess it comes down to configuring win32process.STARTUPINFO correctly, but even after having read tons of documentation on it, I am somehow failing to make sense of it ...
You can use subprocess.Popen or the subprocess.call function instead of os.execvpe. They have a shell flag which ensures that the child process can get stdin.
I have tried this on Windows using the following code:
import os
import subprocess
subprocess.Popen('ssh -o foo user@host', shell=True, env=os.environ)
And it works.
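If the goal is to mimic the semantics of os.execvpe more closely (hand the console over to ssh, then exit with its status), a minimal sketch with the question's placeholder arguments might be:
import os
import subprocess
import sys

# run ssh attached to the current console; call() blocks until it exits
ret = subprocess.call(['ssh', '-o', 'foo', 'user@host'], env=os.environ)
# propagate ssh's exit code, much like an exec'ed process would
sys.exit(ret)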
I asked a question related to this several weeks ago on here:
Python, mpg123 and subprocess not properly using stdin.write or communicate
Thanks to help from there I was able to do what I needed at the time. (I didn't send q, but terminated the subprocess to stop it.)
Now though I seem to be in another bit of a mess.
from subprocess import Popen, PIPE, STDOUT
p = Popen(["mpg123", "-C", "test.mp3"], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
# wait a few seconds before sending this; "q" without a newline is how the
# player's controls quit, as if "mpg123 -C test.mp3" were run interactively
p.communicate(input='q')[0]
Much like before, I need this to be able to quit out of mpg123 with its standard controls (press 'q' to quit, '-' to turn the volume down, '+' to turn it up, etc). I use the code above, which should theoretically work, and which does work with similar programs. Does anyone know of a way I can use the controls built into mpg123 (the ones accessible by running "mpg123 -C whatever.mp3") through a subprocess? terminate isn't enough anymore, as I will need the controls ^_^
EDIT: Many thanks to abarnert for the amazing answer =)
OK, so the new code is simply a slightly modified version of abarnert's answer; however, mpg123 doesn't seem to be accepting the commands:
import os
import pty
import sys
import time

pid, fd = os.forkpty()
if pid:
    time.sleep(5)
    os.write(fd, 'b')  # this should've restarted the file
    time.sleep(5)
    os.write(fd, 'q')  # unfortunately doesn't quit here =(
    time.sleep(5)      # quits after this is finished executing
else:
    os.spawnl(os.P_WAIT, '/usr/bin/mpg123', '-C', 'TEST file.mp3')
If you really need the controls, you can't just use Popen.
mpg123 only enables terminal control if its stdin is a tty, not if it's a file or pipe. That's why you get this line in the banner:
Terminal control enabled, press 'h' for listing of keys and functions.
And the whole point of Popen (and subprocess, and the POSIX APIs it's built on) is pipes.
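You can see that distinction from Python itself; run this minimal check once in a terminal and once behind a pipe:
import sys
# True when stdin is a terminal, False when it comes from a file or pipe
print sys.stdin.isatty()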
So, what can you do about it?
On linux, you can use the pty module. It may also work on other *nix platforms, but it may not—even if it gets built and included in your stdlib. As the docs say:
Because pseudo-terminal handling is highly platform dependent, there is code to do it only for Linux. (The Linux code is supposed to work on other platforms, but hasn’t been tested yet.)
It definitely runs on *BSD platforms on 2.7 and 3.3, and the example in the docs seems to work on both Mac OS X and FreeBSD… but that's as far as I've checked.
Meanwhile, most POSIX platforms will at least have os.forkpty, and that's not much harder, so here's a trivial program that plays the first 5 seconds of a song passed as its first arg:
import os
import pty
import sys
import time
pid, fd = os.forkpty()
if pid:
    time.sleep(5)
    os.write(fd, 'q')  # send mpg123's quit key
else:
    os.spawnl(os.P_WAIT,                # mode
              '/usr/local/bin/mpg123',  # path
              '/usr/local/bin/mpg123', '-C', sys.argv[1])  # args
Note that I used os.spawnl above. This is probably not what you want in a real program; it's for pedagogic purposes, to encourage you to read the docs (and the corresponding manpages) and understand this family of functions.
As the docs explain, this does not use the PATH environment variable, so you need to specify the full path to the program. You can just use spawnlp instead of spawnl to fix this.
Also, spawn may (in fact, always does, although the docs aren't entirely clear) do another fork to execute the child. This really isn't necessary, but spawn does things that you would need to do manually if you just called exec. If you know what you're doing, you may well want to use execl (or execlp) instead of spawnl.
You can even use most of the functionality in subprocess as long as you're careful (do not create any pipes, and remember that you'll end up doing two forks, so make sure to set up the parent/child relationship properly).
Also notice that you need to pass the path to mpg123 twice: once as the path, and then once as the child program's argv[0]. You could also just pass mpg123 the second time. Or, ideally, look at what ps says when you run it from the shell, and pass that. At any rate, you have to pass something as the argv[0]; otherwise, -C ends up being the argv[0], which means mpg123 won't think you gave it a -C flag to enable control keys, but rather that you renamed it to -C and ran it with no flags…
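A sketch of the same program with the child branch using execlp instead, making the argv[0] point explicit:
import os
import sys
import time

pid, fd = os.forkpty()
if pid:
    time.sleep(5)
    os.write(fd, 'q')
else:
    # the first 'mpg123' names the file to find on PATH; the second
    # becomes the child's argv[0], so '-C' is parsed as a real flag
    os.execlp('mpg123', 'mpg123', '-C', sys.argv[1])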
Anyway, you really do need to read the docs to understand what each of these functions does, instead of just treating it like magic code that you don't understand. So, I intentionally used the simplest possible solution to encourage that.
On Windows, there is no such thing as a pty, and no way to do this at all with the facilities built in to Python. You will need to use one of the various third-party libraries for controlling a cmd.exe console (aka DOS prompt) instead.
Based on abarnert's idea, we can open a pseudo-terminal and pass it to subprocess.
import os
import pty
import subprocess
import time
master, slave = os.openpty()
# give the child the slave end as its stdin (a real tty); keep the master
p = subprocess.Popen(['mpg123', '-C', 'music.mp3'], stdin=slave)
time.sleep(3)
os.write(master, 's')  # 's' toggles stop/start (pause)
time.sleep(3)
os.write(master, 's')
time.sleep(6)
os.write(master, 'q')  # 'q' quits
I'm trying to port some Python code like the following to Ruby:
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # figure out what to launch
    cmd = get_command_based_on_user_input()
    # now replace the forked process with the command (argv list assumed)
    os.execvp(cmd[0], cmd)
else:
    # read and write to fd like a terminal
    pass
Since I need to read and write to the subprocess like a terminal, I understand that I should use Ruby's PTY module in lieu of Kernel.fork. But it does not seem to have an equivalent fork method; I must pass a command as a string. This is the closest I can get to Python's functionality:
require 'pty'
# The Ruby executable, ready to execute some codes
RUBY = %Q|/proc/#{Process.pid}/exe -e "%s"|
# A small Ruby program which will eventually replace itself with another program. Very meta.
cmd = "cmd=get_command_based_on_user_input(); exec(cmd)"
r, w, pid = PTY.spawn(RUBY % cmd)
# Read and write from r and w
Obviously some of that is Linux-specific, and that's fine. And obviously some is pseudo-code, but it's the only approach I can find, and I'm only 80% sure that it will work anyway. Surely Ruby has something cleaner?
The important thing is that "get_command_based_on_user_input()" not block the parent process, which is why I stuck it in the child process.
You're probably looking for http://ruby-doc.org/stdlib-1.9.2/libdoc/pty/rdoc/PTY.html, http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-fork and Create a daemon with double-fork in Ruby.
I'd open a PTY in the master process, fork, and reattach the child to said PTY with STDIN.reopen.
First let me say that I know it's better to use the subprocess module, but I'm editing other people's code and I'm trying to make as few changes as possible, which includes avoiding the importing any new modules. So I'd like to stick to the currently-imported modules (os, sys, and paths) if at all possible.
The code is currently (in a file called postfix-to-mailman.py that some of you may be familiar with):
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
    sys.exit(0)
This works fine (though I think sys.exit(0) might never be called and thus be unnecessary).
I believe this replaces the current process with a call to /usr/sbin/sendmail, passing it the arguments /usr/sbin/sendmail (for argv[0], i.e. itself) and 'first@place.com', and passes the environment of the current process - including the email message on sys.stdin - to the replacement process.
What I'd like to do is essentially send another copy of the message before doing this. I can't use execv again because then execution will stop. So I've tried the following:
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.spawnv(os.P_WAIT, "/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'other@place.com'))
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
    sys.exit(0)
However, while it sends the message to other@place.com, it never sends it to first@place.com.
This surprised me because I thought using spawn would start a child process and then continue execution in the current process when it returns (or without waiting, if P_NOWAIT is used).
Incidentally, I tried os.P_NOWAIT first, but the message I got at other@place.com was empty, so at least when I used P_WAIT the message came through intact. But it still never got sent to first@place.com, which is a problem.
I'd rather not use os.system if I can avoid it because I'd rather not go out to a shell environment if it can be avoided (security issues, possible performance? I admit I'm being paranoid here, but if I can avoid os.system I'd still like to).
The only thing I can think of is that the call to os.spawnv is somehow consuming/emptying the contents of sys.stdin, but that doesn't really make sense either. Ideas?
While it might not make sense, that does appear to be the case:
import os
os.spawnv(os.P_WAIT,"/usr/bin/wc", ("/usr/bin/wc",))
os.execv("/usr/bin/wc", ("/usr/bin/wc",))
$ cat j.py | python j.py
4 6 106
0 0 0
In which case you might do something like this
import os
import sys
buf = sys.stdin.read()
wc = os.popen("/usr/sbin/sendmail other@place.com", "w")
wc.write(buf)
wc.close()
wc = os.popen("/usr/sbin/sendmail first@place.com", "w")
wc.write(buf)
wc.close()
sys.exit(0)
sys.stdin is a pipe, and pipes aren't seekable, so you can never rewind that file-like object to read its contents again. To actually invoke sendmail(1) twice, you need to save the contents of stdin, preferably in a temporary file, but if the data is guaranteed to have a limited size you could save it in memory instead.
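If you relax the question's no-new-imports constraint a little, a minimal sketch of the temporary-file approach (the addresses are the question's placeholders):
import os
import sys
import tempfile

# spool the message so it can be replayed for each sendmail invocation
spool = tempfile.TemporaryFile()
spool.write(sys.stdin.read())
for rcpt in ('other@place.com', 'first@place.com'):
    spool.seek(0)
    mail = os.popen('/usr/sbin/sendmail ' + rcpt, 'w')
    mail.write(spool.read())
    mail.close()
sys.exit(0)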
But why go through the trouble? Do you specifically need the email copy to be a separately queued email (and if so, why)? Just add the wanted recipient in your original invocation of sendmail(1). The additional recipient will not be seen in the email headers.
if local in ('postmaster', 'abuse', 'mailer-daemon'):
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail",
                                    'first@place.com',
                                    'otheruser@example.com'))
    sys.exit(0)
Oh, and the sys.exit(0) line will be executed if os.execv() for some reason fails. This'll happen if /usr/sbin/sendmail cannot be executed, e.g. if the executable file doesn't exist or isn't actually executable. In other words, this is an error condition that you should take care of.
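A minimal sketch of handling that error condition explicitly:
import os
import sys

try:
    os.execv("/usr/sbin/sendmail", ("/usr/sbin/sendmail", 'first@place.com'))
except OSError as e:
    # execv only returns on failure, e.g. a missing or non-executable binary
    sys.stderr.write("could not exec sendmail: %s\n" % e)
    sys.exit(1)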
Is it possible to capture Python interpreter's output from a Python script?
Is it possible to capture Windows CMD's output from a Python script?
If so, which librar(y|ies) should I look into?
If you are talking about the Python interpreter or cmd.exe that is the 'parent' of your script, then no, it isn't possible. In every POSIX-like system (you're running Windows now, it seems, and that might have some quirk I don't know about, YMMV) each process has three streams: standard input, standard output and standard error. By default (when running in a console) these are directed to the console, but redirection is possible using the pipe notation:
python script_a.py | python script_b.py
This ties the standard output stream of script A to the standard input stream of script B. Standard error still goes to the console in this example. See the article on standard streams on Wikipedia.
If you're talking about a child process, you can launch it from python like so (stdin is also an option if you want two way communication):
import subprocess
# Of course you can open things other than python here :)
process = subprocess.Popen(["python", "main.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
x = process.stderr.readline()
y = process.stdout.readline()
process.wait()
See the Python subprocess module for information on managing the process. For communication, the process.stdin and process.stdout pipes are considered standard file objects.
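If you only need the output after the child has finished, communicate() is simpler and avoids pipe-buffer deadlocks; a minimal sketch:
import subprocess

process = subprocess.Popen(["python", "main.py"],
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# read both streams to end-of-file, then wait for the process to exit
out, err = process.communicate()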
For use with pipes, reading from standard input as lassevk suggested, you'd do something like this:
import sys
x = sys.stderr.readline()
y = sys.stdin.readline()
sys.stdin and sys.stdout are standard file objects as noted above, defined in the sys module. You might also want to take a look at the pipes module.
Reading data with readline() as in my example is a pretty naïve way of getting data, though. If the output is not line-oriented or is indeterministic, you probably want to look into polling, which unfortunately does not work on Windows, but I'm sure there's some alternative out there.
I think I can point you to a good answer for the first part of your question.
1. Is it possible to capture Python interpreter's output from a Python script?
The answer is "yes", and personally I like the following lifted from the examples in the PEP 343 -- The "with" Statement document.
from contextlib import contextmanager
import sys

@contextmanager
def stdout_redirected(new_stdout):
    saved_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout.close()
        sys.stdout = saved_stdout
And used like this:
with stdout_redirected(open("filename.txt", "w")):
    print "Hello world"
A nice aspect of it is that it can be applied selectively around just a portion of a script's execution, rather than its entire extent, and stays in effect even when unhandled exceptions are raised within its context. If you re-open the file in append-mode after its first use, you can accumulate the results into a single file:
with stdout_redirected(open("filename.txt", "w")):
    print "Hello world"
print "screen only output again"
with stdout_redirected(open("filename.txt", "a")):
    print "Hello world2"
Of course, the above could also be extended to also redirect sys.stderr to the same or another file. Also see this answer to a related question.
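For instance, a sketch of the stderr variant, following exactly the same pattern:
from contextlib import contextmanager
import sys

@contextmanager
def stderr_redirected(new_stderr):
    saved_stderr = sys.stderr
    sys.stderr = new_stderr
    try:
        yield None
    finally:
        sys.stderr.close()
        sys.stderr = saved_stderr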
Actually, you definitely can, and it's beautiful, ugly, and crazy at the same time!
You can replace sys.stdout and sys.stderr with StringIO objects that collect the output.
Here's an example, save it as evil.py:
import sys
import StringIO
s = StringIO.StringIO()
sys.stdout = s
print "hey, this isn't going to stdout at all!"
print "where is it ?"
sys.stderr.write('It actually went to a StringIO object, I will show you now:\n')
sys.stderr.write(s.getvalue())
When you run this program, you will see that:
nothing went to stdout (where print usually prints to)
the first string that gets written to stderr is the one starting with 'It'
the next two lines are the ones that were collected in the StringIO object
Replacing sys.stdout/err like this is an application of what's called monkeypatching. Opinions may vary whether or not this is 'supported', and it is definitely an ugly hack, but it has saved my bacon when trying to wrap around external stuff once or twice.
Tested on Linux, not on Windows, but it should work just as well. Let me know if it works on Windows!
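One practical note: the interpreter keeps the original streams around in sys.__stdout__ and sys.__stderr__, so undoing the patch is a one-liner each:
import sys

sys.stdout = sys.__stdout__  # restore the real stdout after patching
sys.stderr = sys.__stderr__  # likewise for stderr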
You want subprocess. Look specifically at Popen in 17.1.1 and communicate in 17.1.2.
In which context are you asking?
Are you trying to capture the output from a program you start on the command line?
If so, then this is how to execute it:
somescript.py | your-capture-program-here
and to read the output, just read from standard input.
If, on the other hand, you're executing that script or cmd.exe or similar from within your program, and want to wait until the script/program has finished and capture all its output, then you need to look at the library calls you use to start that external program; most likely there is a way to ask it to give you some way to read the output and wait for completion.
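With the standard library's subprocess module, for example, a minimal sketch (Python 2.7+ for check_output) would be:
import subprocess

# start the external program, wait for completion, capture everything it printed
output = subprocess.check_output(["python", "somescript.py"])
print output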