So I'm writing a command line utility and I've run into a problem I'm curious about.
The utility can be called with file arguments or can read from sys.stdin. Originally I was using sys.stdin.isatty() to figure out if data is being piped in. However, I found that if I call the utility remotely via ssh server utility, sys.stdin.isatty() will return False, despite the fact that no actual data is being piped in.
As a workaround I'm using - as the file argument to force reading from stdin (eg: echo "data_here" | utility -f -), but I'm curious to know if there's a reliable way to tell the difference between a pipe getting data from a process and a pipe that's only open because the call is via ssh.
Systems programming is not my forte, so I'm grateful for any help I can get from you guys.
You can tell if you're being invoked via SSH by checking your environment. If you're being invoked via an SSH connection, the environment variables SSH_CONNECTION and SSH_CLIENT will be set. You can test if they are set with, say:
if "SSH_CONNECTION" in os.environ:
# do something
Another option, if you wanted to stick with your original approach of sys.stdin.isatty(), would be to allocate a pseudo-tty for the SSH connection. Normally SSH does this by default if you just SSH in for an interactive session, but not if you supply a command. However, you can force it to do so when supplying a command by passing the -t flag:
ssh -t server utility
However, I would caution you against doing either of these. As you can see, trying to detect whether you should accept input from stdin based on whether it's a TTY can cause some surprising behavior. It could also cause frustration from users if they wanted a way to interactively provide input to your program when debugging something.
The approach of adding an explicit - argument makes it a lot more explicit and less surprising which behavior you get. Some utilities also just use the lack of any file arguments to mean to read from stdin, so that would also be a less-surprising alternative.
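As an illustration, here's a minimal sketch of that convention using argparse, whose FileType already treats - as stdin; the argument name files is just a placeholder:

import argparse
import sys

parser = argparse.ArgumentParser()
# FileType('r') understands "-" as sys.stdin; with no file arguments
# at all, fall back to reading stdin as well.
parser.add_argument('files', nargs='*', type=argparse.FileType('r'),
                    default=[sys.stdin])
args = parser.parse_args()
for f in args.files:
    data = f.read()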
According to this answer the SSH_CLIENT or SSH_TTY environment variable should be declared. From that, the following code should work:
import os

def running_ssh():
    return 'SSH_CLIENT' in os.environ or 'SSH_TTY' in os.environ
A more complete example would examine the parent processes to check whether any of them is sshd, which would probably require the psutil module.
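For instance, here's a rough sketch of that idea, assuming psutil >= 5.6 (which provides Process.parents()) is installed; the function name is mine:

import psutil

def running_under_sshd():
    # Walk up the process tree and look for an sshd ancestor.
    return any(parent.name() == 'sshd' for parent in psutil.Process().parents())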
Related
I'm pulling and pushing to a GitHub repository with a Python script. For the GitHub repository, I need to use an ssh key.
If I do this manually before running the script:
eval $(ssh-agent -s)
ssh-add ~/.ssh/myprivkey
everything works fine and the script works. But after a while the key apparently expires, and I have to run those two commands again.
The thing is, if I do that inside the Python script with os.system(cmd), it doesn't work; it only works if I do it manually beforehand.
I know this must be a messy way to use the ssh agent, but I honestly don't know how it works, and I just want the script to work, that's all
The script runs once an hour, just in case
While the normal approach would be to run your Python script in a shell where the ssh agent is already running, you can also consider an alternative approach with sshify.py:
# This utility will execute the given command (by default, your shell)
# in a subshell, with an ssh-agent process running and your
# private key added to it. When the subshell exits, the ssh-agent
# process is killed.
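If you'd rather keep everything inside the script itself, here's a minimal sketch of the same idea in plain Python. It parses the variables that ssh-agent -s prints and exports them into the current process, which is exactly what a bare os.system call can't do (the variables die with its subshell). The key path is the one from the question.

import os
import re
import subprocess

# ssh-agent -s prints lines like: SSH_AUTH_SOCK=/tmp/...; export SSH_AUTH_SOCK;
output = subprocess.check_output(['ssh-agent', '-s'], text=True)
for name, value in re.findall(r'([A-Z_]+)=([^;]+);', output):
    os.environ[name] = value

# ssh-add (and any later git call) now inherits the agent's variables.
subprocess.run(['ssh-add', os.path.expanduser('~/.ssh/myprivkey')], check=True)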
Consider defining the ssh key path against a host of github.com in your ssh config file as outlined here: https://stackoverflow.com/a/65791491/14648336
On Linux, create a file called config under ~/.ssh/ and put in something similar to the above answer:
Host github.com
    HostName github.com
    User your_user_name
    IdentityFile ~/.ssh/your_ssh_priv_key_file_name
This would save the need for starting an agent each time and also prevent the need for custom environment variables if using GitPython (you mention using Python) as referenced in some other SO answers.
The company I work for uses an archaic information system (Copyright 1991-2001). The system is a CentOS machine running an ssh server. There's no access to the back end or its data in any way. All data needs to be retrieved through text reports, or input with manual keystrokes. Here's an example of the view you get when you log in.
I'm trying to write a Python script that will simulate keystrokes to run reports and do trivial tasks. I've already successfully done this with a .cmd file on Windows that connects and simulates keystrokes. The problem is that there are some processes with unpredictable branches (a message sometimes pops up and asks for some information or a key press to verify that you've seen it). I can predict where a branch might occur, but can't detect whether it actually has, because my .cmd file is blind to output from the ssh session. (I'm working on Windows, by the way.)
What I'm trying to do is use a python script that uses stdin and makes decisions based on what it sees, but I'm new to how piping works. Piping into my script works, but I'm unsure how to send keystrokes back to the ssh session from the python script. Here's an example of my test script:
import sys

buff = ''
try:
    while True:
        buff += sys.stdin.read(1)
        if buff[-5:] == 'press':
            print('found word "press"!')
            # Send a keystroke back to the ssh session here
            buff = ''
except KeyboardInterrupt:
    sys.stdout.flush()
And here's how I call it:
ssh MyUsername@###.###.###.### | python -u pipe_test.py
While it's running, I can't see anything, but I've verified that I can send keystrokes through the terminal with my regular keyboard.
Any ideas on how to output keystrokes to the ssh session?
Should I be doing some completely different, much simpler thing?
FYI: The data sent by the server to the terminal has ASCII escape sequences flying all over the place. It's not a nice bash interface or anything like that. Also, I've installed a bunch of Unix command line tools so that I can, for example, ssh from Windows.
tl;dr How do I pipe from an ssh session into python, and send keystrokes back to the ssh session from that same python script?
You definitely don't do this with pipes. A Unix pipe is a unidirectional inter-process communications mechanism. You can send data to it, or you can read data from it. But not both (through the same pipe).
It is possible to use pairs of pipes to create co-processes. This is even supported directly in some Unix shells, such as Korn shell and, as of version 4, Bash (https://www.gnu.org/software/bash/manual/html_node/Coprocesses.html). However, this mechanism is somewhat fragile and prone to deadlock. It works so long as the processes on both sides of the pair of pipes are rigorous in their handling of the pipes and the associated buffering. (Actually, it's possible for just one end to be rigorous, but even that's tricky.) This is not likely to be the case for the programs you're trying to run remotely.
Someone suggested pexpect which is an excellent choice for controlling a locally spawned terminal or curses application. It's possible to manage a remote process with it, by spawning a local ssh client and controlling that.
However, a better choice for accessing ssh protocols and APIs from Python would be Paramiko. This implements the ssh protocol such that you access the remote sshd process as an API rather than through a client (command line utility).
An advantage of this approach is that you can programmatically manage port redirections, transfer and manage files (as you would with sftp) (including setting permissions and such), and you can execute programs and separately access their standard input, output, and error streams or their pseudo-terminals (pty) as well as fetch the exit codes of remote processes as distinct from the exit code of your local ssh client.
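To make that concrete, here's an untested sketch of what the keystroke loop from the question might look like on top of Paramiko's interactive channel; the host, credentials, and prompt text are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('###.###.###.###', username='MyUsername', password='...')

# invoke_shell() allocates a remote pty, so you can both read what the
# application prints and write keystrokes back on the same channel.
channel = client.invoke_shell()
buff = ''
while True:
    buff += channel.recv(1024).decode('utf-8', errors='replace')
    if 'press' in buff:
        channel.send(b'\n')  # answer the prompt with a keystroke
        buff = ''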
There's even a package, paramiko-expect, that extends Paramiko to make working with these remote pty objects more straightforward. (Pexpect provides similar features for ptys controlling local processes.) [Caveat: I haven't used Fotis Gimian's package yet, but I've used Paramiko fairly extensively and sometimes wished I had something like it.]
As you may have figured out from this answer, the complexity of programmatically dealing with an interactive terminal/text program under Unix (Linux or any of its variants) has to do with details about how that program is written.
Some programs, such as shells, can be completely driven by line-oriented input and output on their standard file descriptors (stdin, stdout, and stderr). Others must be controlled through their terminal interfaces. That is accomplished by launching them in a pseudo-terminal environment, such as the one provided by sshd when it starts a normal interactive session, the ones supplied by expect (and pexpect and the various other modules and utilities inspired by the old Tcl/expect), and the ones supplied by xterm or other terminal windowing programs under any modern OS (even including Cygwin or "Bash for Windows"/WSL, the Windows Subsystem for Linux, under the latest versions of Microsoft Windows).
In general your attempts will need to use one or another approach and pipes are only very crudely useful for the approach using the standard file descriptors. The decision of which approach to use will mostly be driven by the program you're trying to (remotely) control.
I need to make a python script that will do these steps in order, but I'm not sure how to go about setting this up.
SSH into a server
Copy a folder from point A to point B (cp /foo/bar/folder1 /foo/folder2)
mysql -u root -pfoobar (This database is accessible from localhost only)
create a database, do some other mysql stuff in the mysql console
Replaces instances of Foo with Bar in file foobar
Copy and edit a file
Restart a service
The fact that I have to ssh into a server, and THEN do all of this is really confusing me. I looked into the Fabric library, but it seems to only do one command at a time and doesn't keep context from previous commands.
Look into Fabric more. It is still probably what you want.
This page has a lot of good examples.
By "context" I'm assuming you want to be able to cd into another directory and run commands from there. That's what fabric.context_managers.cd is for -- search for it on that page.
Sounds like you are doing some sort of remote deployment/configuring. There's a whole world of tools out there to set this up professionally; look into Chef and Puppet.
Alternatively if you're just looking for a quick and easy way of scripting some remote commands, maybe pexpect can do what you need.
Pexpect is a pure Python module for spawning child applications; controlling them; and responding to expected patterns in their output.
I haven't used it myself but a quick glance at its manual suggests it can work with an SSH session fine: https://pexpect.readthedocs.org/en/latest/api/pxssh.html
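To give a flavor, here's an untested sketch based on the pxssh docs linked above, with the host, user, password, and command as placeholders:

from pexpect import pxssh

s = pxssh.pxssh()
s.login('server.example.com', 'username', 'password')
s.sendline('cp -r /foo/bar/folder1 /foo/folder2')
s.prompt()                    # wait for the shell prompt to return
print(s.before.decode())      # everything the command printed
s.logout()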
I have never used Fabric.
My way of solving those kinds of issues (before I started using SaltStack) was to use pexpect to run the ssh connection and all the commands that were needed.
Maybe using a series of SQL scripts to work with the database (just to make it easier) would help.
Another way, since you need to access the remote server using ssh, would be to use paramiko to connect and execute commands remotely. It's a bit more complicated when you want to see what's happening on stdout (whereas with pexpect you will see exactly what's going on).
But it all depends on what you really need.
I'm trying use python's cmd library to create a shell with limited commands. One requirement I have is to be able to run a command that executes an existing shell script which opens an ssh session on a remote machine and from there allows the user to interact with the remote shell as if it was a regular ssh session.
Simply using subprocess.Popen(['/path/to/connect.sh']) works well, at least as a starting point, except for one issue: you can interact with the remote shell, but the input you type is not shown on stdout. So, for example, you see the prompt on your stdout, but when you type 'ls' you don't see it being typed; when you hit return, though, it works as expected.
I'm trying to wrap my head around how to print the input to stdout and still send it along to the remote ssh session.
EDIT:
Actual code without using cmd was just the one line:
ssh_session = subprocess.Popen(['connect.sh'])
it was fired from a do_* method in a class which extended cmd.Cmd. I think I may end up using paramiko but would still be interested in anyone's input on this.
Assuming you are using a Unix-like system, SSH detects whether you are on a terminal. When it detects that you are not on a terminal, like when using subprocess, it will not echo the characters typed. Instead you might want to use a pseudo-terminal; see pexpect, or pty. This way you can get the output from SSH as if it were running on a true terminal.
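For example, here's a minimal sketch using pexpect, reusing the connect.sh wrapper from the question:

import pexpect

child = pexpect.spawn('/path/to/connect.sh')
# interact() connects your real terminal to the child's pty, so the
# remote side echoes what you type just like a normal ssh session.
child.interact()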
When I execute something like:
run('less <somefile>')
Within fabric, it prepends the lines with Out: and interacting with it doesn't work as expected.
If I run it with:
run('cat <something>', pty=False)
The output isn't prepended with anything and I can actually pipe that into less locally, like:
fab less | less
However, I'm not sure if that's recommended, since it may be taxing on the remote resource: cat will continually be piping back through ssh. Also, when I quit less before the whole file has been cat'd (it could be over 1 GB), I get a broken pipe error.
What would be the recommended way to facilitate this? Should I just use ssh directly like:
ssh <remote host> less <something>
If you're doing interactive work on the remote host, then maybe just using SSH is fine. I think fabric is mostly useful when automating actions.