I have the following script, test.py:
#!/usr/bin/env python2
from subprocess import Popen, PIPE, STDOUT
proc = Popen(['scp', 'test_file', 'user@192.168.120.172:/home/user/data'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
out, err = proc.communicate(input='userpass\n')
print('stdout: ' + out)
print('stderr: ' + str(err))
which is meant to copy test_file to the remote directory /home/user/data at 192.168.120.172, logging in as the user user. For this I must use scp; no key authentication is allowed (don't ask why, it's just how things are and I cannot change them).
Even though I am piping userpass to the process, I still get a password prompt in the terminal. I want to just run test.py on the local machine and have the remote machine receive the file without any user interaction.
I thought I wasn't using communicate() correctly, so I manually called
proc.stdin.write('userpass\n')
proc.stdin.flush()
out, err = proc.communicate()
but nothing changed and I still got that password prompt.
When scp or ssh prompt for a password, they do not read it from stdin. Instead they open /dev/tty and read the password directly from the controlling terminal.
sshpass works by creating its own dummy terminal and spawning ssh or scp in a child process controlled by that terminal. That's basically the only way to intercept the password prompt. The recommended solution is to use public key authentication, but you say you cannot do that.
If as you say you cannot install sshpass and also cannot use a secure form of authentication then about the only thing you can do is re-implement sshpass in your own code. sshpass itself is licensed under the GPL, so if you copy the existing code be sure not to infringe on its copyleft.
Here's the comment from the sshpass source which describes how it manages to spoof the input:
/*
Comment no. 3.14159
This comment documents the history of code.
We need to open the slavept inside the child process, after "setsid", so that it becomes the controlling
TTY for the process. We do not, otherwise, need the file descriptor open. The original approach was to
close the fd immediately after, as it is no longer needed.
It turns out that (at least) the Linux kernel considers a master ptty fd that has no open slave fds
to be unused, and causes "select" to return with "error on fd". The subsequent read would fail, causing us
to go into an infinite loop. This is a bug in the kernel, as the fact that a master ptty fd has no slaves
is not a permanent problem. As long as processes exist that have the slave end as their controlling TTYs,
new slave fds can be created by opening /dev/tty, which is exactly what ssh is, in fact, doing.
Our attempt at solving this problem, then, was to have the child process not close its end of the slave
ptty fd. We do, essentially, leak this fd, but this was a small price to pay. This worked great up until
openssh version 5.6.
Openssh version 5.6 looks at all of its open file descriptors, and closes any that it does not know what
they are for. While entirely within its prerogative, this breaks our fix, causing sshpass to either
hang, or do the infinite loop again.
Our solution is to keep the slave end open in both parent AND child, at least until the handshake is
complete, at which point we no longer need to monitor the TTY anyways.
*/
So what sshpass does is open a pseudo-terminal device (using posix_openpt), then fork; in the child it makes the slave end the controlling terminal for the process, and then it can exec the scp command.
I don't know if you can get this to work from Python, but the good news is that the standard library includes functions for working with pseudo-terminals: https://docs.python.org/3.6/library/pty.html
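For illustration, here is a minimal, untested sketch of the sshpass approach built on pty.fork() (which performs the setsid and controlling-TTY setup described above); the host, user, file name and prompt-matching string are assumptions carried over from the question:

import os
import pty

def scp_with_password(password):
    # pty.fork() gives the child the slave end as its controlling TTY,
    # which is where scp will look when it opens /dev/tty
    pid, fd = pty.fork()
    if pid == 0:
        # Child: exec scp; this call does not return on success
        os.execlp('scp', 'scp', 'test_file',
                  'user@192.168.120.172:/home/user/data')
    # Parent: wait for the password prompt on the master side, then answer it
    prompt = os.read(fd, 1024)
    if b'assword' in prompt:  # matches both "password:" and "Password:"
        os.write(fd, password.encode('ascii') + b'\n')
    # Drain scp's remaining output until it exits
    try:
        while os.read(fd, 1024):
            pass
    except OSError:  # Linux raises EIO on the master once the child exits
        pass
    os.waitpid(pid, 0)

scp_with_password('userpass')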
Related
I am using the Python paramiko module to run a built-in paramiko function SSH.execute on a remote server. I want to run a script on the server that will require 4 prompts. I was planning to do a more complex version of this:
ExpectedString = 'ExpectedOutput'
Output = SSH.execute('./runScript')
if Output == ExpectedString:
    SSH.execute('Enter this')
else:
    raise SomeException
The problem is that nothing comes back as output, because the server is waiting for a number to be entered, and the script gets stuck at this SSH.execute call. So even if another SSH.execute command is issued afterwards, it never gets run! Should I be looking at something other than paramiko?
You need to interact with the remote script. Actually, SSH.execute doesn't exist; I assume you're talking about exec_command. Instead of just returning the output, it gives you wrappers for the stdout, stdin and stderr streams, which you can use directly to communicate with the remote script.
Basically, this is how you run a command and pass data over stdin (and receive output using stdout):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("some_script")
stdin.write('expected_input\n')
stdin.flush()
data = stdout.read().splitlines()
You should check for the prompts, of course, instead of relying on good timing.
@leoluk - yep, I understand your problem; both of those recommended solutions won't work. The problem with exec_command, as you said, is that you can only read the output once the command completes. So if you wanted to remotely run the command rm -i *, you wouldn't be able to read which file is about to be deleted before responding with a "yes" or a "no". The key here is to use invoke_shell. See this YouTube link - https://www.youtube.com/watch?v=lLKdxIu3-A4 - it helped and got me going.
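For later readers, here is a rough sketch of the invoke_shell approach; the host, credentials, command and prompt text are placeholders rather than anything from this thread:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='foo', password='bar')

# Unlike exec_command, invoke_shell gives you a channel you can read from
# while the remote command is still waiting for input
chan = ssh.invoke_shell()
chan.send('rm -i some_file\n')
buf = ''
while 'remove' not in buf:  # wait for rm's confirmation prompt
    buf += chan.recv(1024).decode('utf-8', 'replace')
chan.send('y\n')  # answer the prompt
ssh.close()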
I am trying to use subprocess.Popen to control an ssh process and interact with it via pipes, like so:
import subprocess

p = subprocess.Popen(['ssh', '-tt', 'LOGIN@HOSTNAME'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     universal_newlines=True)
while True:
    (out_stdout, out_stderr) = p.communicate(timeout=10)
    if out_stderr:
        print(out_stderr)
    if not out_stdout:
        raise EOFError
    print(out_stdout)
This works fine without the '-tt' option to ssh. However the program I need to interact with on the remote side of the ssh breaks if there is no pseudo tty allocated, so I am forced to use it.
What seems to happen is that the reads inside p.communicate block indefinitely (or until the timeout), even when input is available.
I have rewritten this using lower-level calls (os.read, select.select, etc.) to avoid going through Popen.communicate. select will actually report the file descriptor as ready, but a subsequent read on that file descriptor blocks anyway. If I disable universal_newlines and set bufsize=0 in the Popen call, it works fine, but then I am forced to do the binary/unicode conversion and line-ending processing myself.
It's worth saying, though, that disabling universal_newlines in the p.communicate version also blocks indefinitely, so it's not just that.
Any advice on how I can get line buffered input working properly here without having to reimplement everything?
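For reference, here is a minimal sketch (Unix only, untested) of the working low-level variant described in the question: bufsize=0, no universal_newlines, and manual line handling. LOGIN@HOSTNAME is the question's placeholder.

import os
import select
import subprocess

p = subprocess.Popen(['ssh', '-tt', 'LOGIN@HOSTNAME'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, bufsize=0)
buf = b''
while True:
    ready, _, _ = select.select([p.stdout, p.stderr], [], [], 10)
    for f in ready:
        chunk = os.read(f.fileno(), 4096)  # returns whatever is available
        if not chunk:
            raise EOFError
        if f is p.stderr:
            print(chunk.decode('utf-8', 'replace'))
            continue
        buf += chunk
        while b'\n' in buf:  # manual line-ending processing
            line, buf = buf.split(b'\n', 1)
            print(line.decode('utf-8', 'replace'))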
There are library alternatives to subprocess and the SSH binary that are better suited for such tasks.
parallel-ssh:
from pssh.pssh2_client import ParallelSSHClient

client = ParallelSSHClient(['HOSTNAME'], user='LOGIN')
output = client.run_command('echo', use_pty=True)
for host, host_output in output.items():
    for line in host_output.stdout:
        print(line)
Replace echo with the command you need to run, or leave it as-is if no command is required. The library requires that something be passed as the command even if the remote side executes something automatically.
See also documentation for the single host SSHClient of the same project.
Per the documentation, line parsing and encoding are handled by the library, which is also cross-platform.
There are others, like paramiko and ssh2-python, that are lower level and need more code for the equivalent of the above - see their respective home pages for examples.
Hello minds of stackoverflow,
I've run into a perplexing bug. I have a Python script that creates a new thread that ssh's into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I ssh into the machine again and kill -9 the process. This works well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user#%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that instead of creating a new line for each print, it appends the width of my terminal in whitespace to the end of each line and prints everything as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still successfully read. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh and telnet calls in this script that work fine. However, this is the only one I'm running in a separate thread, as it is the only one that does not return. I need to retain the ability to kill the process on the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing the terminal settings. These changes bypass stderr and stdout, for good reasons: ssh itself needs this to ask users for passwords even when its output is being redirected.
One way to solve this is to use the Python module pexpect (a third-party library) to launch your process, as it will create its own fake tty that you don't care about.
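A rough, untested sketch of that suggestion using pexpect; the command is the one from the question, with the IP variables bound to made-up placeholder addresses:

import pexpect

vUE_IP, vAP_IP = '192.0.2.10', '192.0.2.20'  # placeholder addresses
cmd = ("sudo ssh -ti ~/.ssh/my_key.pem user@%s "
       "'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'"
       % (vUE_IP, vAP_IP, vUE_IP))

# pexpect allocates a pty for the child, so any terminal-mode changes the
# remote process makes land on that pty instead of your real terminal
child = pexpect.spawn('/bin/bash', ['-c', cmd], timeout=None)
child.expect(pexpect.EOF)  # block until the remote process exits
output = child.before      # everything the process printed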
BTW, to "repair" your terminal, use the reset command. As you already noticed, you can enter commands. reset will set the terminal to default settings.
Preface: I am fully aware that this could be illegal if not on a test machine. I am doing this as a learning exercise for learning python for security and penetration testing. This will ONLY be done on a linux machine that I own and have full control over.
I am learning Python as my first scripting language, hopefully for use down the line in a security position. Upon asking for ideas of scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the user names from /etc/passwd from an account that does NOT have sudo privileges and try to su into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users.
My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, to passing it as a variable, to using the pexpect module. Nothing seems to work.
When I Google it, all of the recommendations point to using sudo, which in this scenario isn't a valid option, as the user I have access to doesn't have sudo privileges.
I am beyond desperate on this, just to finish the challenge. I have asked on reddit, in IRC, and all of my programming wizard friends, and beyond echo "password" | sudo -S su, which can't work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password" | su, I get su: must be run from a terminal. This happens at both a # and a $ prompt.
Is this even possible?
The problem is that su and friends read the password directly from the controlling terminal for the process, not from stdin. The way to get around this is to launch your own "pseudoterminal" (pty). In python, you can do that with the pty module. Give it a try.
Edit: The documentation for python's pty module doesn't really explain anything, so here's a bit of context from the Unix man page for the pty device:
A pseudo terminal is a pair of character devices, a master device and a slave device. The slave device provides to a process an interface identical to that described in tty(4). However, whereas all other devices which provide the interface described in tty(4) have a hardware device of some sort behind them, the slave device has, instead, another process manipulating it through the master half of the pseudo terminal. That is, anything written on the master device is given to the slave device as input and anything written on the slave device is presented as input on the master device. [emphasis mine]
The simplest way to get your pty working is with pty.fork(), which you use like a regular fork. Here's a simple (REALLY minimal) example. Note that if you read more characters than are available, your process will deadlock: it will try to read from an open pipe, but the only way for the process at the other end to generate more output is for this process to send it something!
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # We're the child process: switch to running a command
    os.execl("/bin/cat", "cat", "-n")
    print "Exec failed!!!!"
else:
    # We're the parent process
    # Send something to the child process
    os.write(fd, "Hello, world!\n")
    # Read the terminal's echo of what we typed
    print os.read(fd, 14),
    # Read command output
    print os.read(fd, 22)
If all goes well you should see this:
Hello, world!
     1  Hello, world!
Since this is a learning exercise, here's my suggested reading list for you: man fork, man execl, and Python's subprocess and os modules (since you're already using subprocess, you may already know some of this). Keep in mind the difference, in Unix and in Python, between a file descriptor (which is just a number) and a file object, which is a Python object with methods (in C it's a structure or the like). Have fun!
If you just want to do this for learning, you can easily build a fake environment with your own faked passwd file. You can use Python's built-in crypt module to generate the password hashes. This has the advantage of giving you proper test cases: you know what you are looking for and where you should succeed or fail.
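As a hedged illustration of that idea (the crypt module is Unix-only, and the user names, password and file layout here are invented for the test fixture):

import crypt

test_password = 'hunter2'                  # the one password to try
hashed = crypt.crypt(test_password, 'ab')  # classic DES crypt with salt 'ab'
with open('fake_passwd', 'w') as f:
    for user in ('alice', 'bob', 'carol'):
        # old-style passwd line with the hash in the second field
        f.write('%s:%s:1000:1000::/home/%s:/bin/sh\n' % (user, hashed, user))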
There's a similar question to mine in another thread.
I want to send a command to my subprocess, interpret the response, then send another command. It would seem a shame to have to start a new subprocess to accomplish this, particularly if subprocess2 must perform many of the same tasks as subprocess1 (e.g. ssh, open mysql).
I tried the following:
subprocess1.stdin.write([my commands])
subprocess1.stdin.flush()
subprocess1.stdout.read()
But without a definite byte count for read(), the program gets stuck executing that instruction, and I can't supply an argument to read() because I can't guess how many bytes are available in the stream.
I'm running WinXP, Py2.7.1
EDIT
Credit goes to @regularfry for giving me the best solution for my real intention (read the comments on his response, as they pertain to accomplishing my goal through an SSH tunnel). (His/her answer has been voted up.) For the benefit of any viewer who hereafter comes for an answer to the title question, however, I've accepted @Mike Pennington's answer.
Your choices are:
Use a line-oriented protocol (and use readline() rather than read()), and ensure that every possible line sent is a valid message (see the sketch after this list);
Use read(1) and a parser to tell you when you've read a full message; or
Pickle message objects into the stream from the subprocess, then unpickle them in the parent. This handles the message length problem for you.
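Here is a minimal sketch of the first option, a line-oriented protocol read with readline(); the command and the messages are placeholders, and the str-based writes match the question's Python 2.7 environment (on Python 3 add universal_newlines=True):

import subprocess

proc = subprocess.Popen(['some_interactive_tool'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write('first command\n')
proc.stdin.flush()
line = proc.stdout.readline()  # returns as soon as one full line arrives
if 'expected' in line:
    proc.stdin.write('second command\n')
    proc.stdin.flush()

Because readline() returns at the first newline instead of waiting for EOF, it sidesteps the unknown-byte-count problem, as long as every message the child sends ends with a newline.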
@JellicleCat, I'm following up on the comments. I believe wexpect is a part of Sage... AFAIK, it is not packaged separately, but you can download wexpect here.
Honestly, if you're going to drive programmatic ssh sessions, use paramiko. It is supported as an independent installation, has good packaging, and should install natively on Windows.
EDIT
Sample paramiko script to cd to a directory, execute an ls and exit... capturing all results...
import sys
sys.stderr = open('/dev/null')  # Silence silly warnings from paramiko
import paramiko as pm
sys.stderr = sys.__stderr__
import os

class AllowAllKeys(pm.MissingHostKeyPolicy):
    def missing_host_key(self, client, hostname, key):
        return
HOST = '127.0.0.1'
USER = ''
PASSWORD = ''
client = pm.SSHClient()
client.load_system_host_keys()
client.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
client.set_missing_host_key_policy(AllowAllKeys())
client.connect(HOST, username=USER, password=PASSWORD)
channel = client.invoke_shell()
stdin = channel.makefile('wb')
stdout = channel.makefile('rb')
stdin.write('''
cd tmp
ls
exit
''')
print stdout.read()
stdout.close()
stdin.close()
client.close()
This approach will work (I've done this), but it will take some time and it uses Unix-specific calls. You'll have to abandon the subprocess module and roll your own equivalent based on fork/exec and os.pipe().
Use the fcntl.fcntl function to place the stdin/stdout file descriptors (read and write) for your child process into non-blocking mode (the O_NONBLOCK option constant) after creating them with os.pipe().
Use the select.select function to poll or wait for availability on your file descriptors. To avoid deadlocks, you will need select() to ensure that writes will not block, just like reads. Even then, you must account for OSError exceptions when you read and write, and retry when you get EAGAIN errors. (Even when using select before a read/write, EAGAIN can occur in non-blocking mode; this is a common kernel bug that has proven difficult to fix.)
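A hedged sketch of the pipe/select approach described above; error handling is minimal, the command name is a placeholder, and only the read side is shown:

import errno
import fcntl
import os
import select

r_fd, w_fd = os.pipe()  # pipe that will carry the child's stdout
pid = os.fork()
if pid == 0:
    os.dup2(w_fd, 1)    # child: point stdout at the write end
    os.close(r_fd)
    os.execlp('some_command', 'some_command')  # placeholder command
    os._exit(1)         # only reached if exec fails

os.close(w_fd)
# Parent: make the read end non-blocking
flags = fcntl.fcntl(r_fd, fcntl.F_GETFL)
fcntl.fcntl(r_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

while True:
    ready, _, _ = select.select([r_fd], [], [], 10)
    if not ready:
        continue        # timed out; nothing to read yet
    try:
        data = os.read(r_fd, 4096)
    except OSError as e:
        if e.errno == errno.EAGAIN:  # select was optimistic; retry
            continue
        raise
    if not data:        # EOF: the child closed its end
        break
    # ... process data here ...
os.waitpid(pid, 0)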
If you are willing to build on the Twisted framework, they have supposedly solved this problem for you; all you have to do is write a ProcessProtocol subclass. But I haven't tried that myself yet.
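For completeness, an untested sketch of the Twisted route, based on twisted.internet.protocol.ProcessProtocol; /bin/cat stands in for the real child process:

from twisted.internet import protocol, reactor

class Echoer(protocol.ProcessProtocol):
    def connectionMade(self):
        self.transport.write(b'hello\n')  # send to the child's stdin
    def outReceived(self, data):
        print('got: %r' % data)           # the child's stdout arrives here
        self.transport.loseConnection()   # close the child's pipes
    def processEnded(self, reason):
        reactor.stop()

reactor.spawnProcess(Echoer(), '/bin/cat', ['cat'])
reactor.run()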