Paramiko read stdout after every stdin [duplicate] - python

I am writing a program in Python which must communicate through SSH with a physical target and send some commands to this target automatically (it is for testing).
I started by doing this with Paramiko and everything was perfect until I had to send several commands where, for example, the second one must be executed in the context of the first (for example, the first one is cd /mytargetRep and the second one is ./executeWhatIWant). I can't use exec_command to do so, because each exec_command starts a new session.
I tried to use a channel with invoke_shell(), but I have another problem with this one: I don't know when command execution has ended. Some commands execute very quickly and others take much longer, so I need to know when the command execution is over.
I know a workaround is to use exec_command with shell logic operators such as && or ;. For example exec_command("cd /mytargetRep && ./executeWhatIWant"). But I can't do that, because it must also be possible to execute commands manually (I have a minimalist terminal where I can send commands), so for example the user will type cd /mytargetRep, then ./executeWhatIWant, and not cd /mytargetRep && ./executeWhatIWant.
So my question is: is there a solution using Paramiko to send several commands in the same SSH session and be able to know when a command execution ends?
Thanks

It seems that you want to implement an interactive shell, yet you need to control individual command execution. That's not really possible with just the SSH interface. The "shell" channel in SSH is a black box with an input and an output. So there's nothing in Paramiko that will help you implement this.
If you need to find out when a specific command finishes or where an output of a specific command ends, you need to use features of a shell.
You can solve that by inserting a unique separator (string) in between the commands and searching for it in the channel output stream. With common *nix shells, something like this works:
channel = ssh.invoke_shell()
channel.send('cd /mytargetRep\n')
channel.send('echo unique-string-separating-output-of-the-commands\n')
channel.send('./executeWhatIWant\n')
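To make the separator idea concrete, here is a minimal sketch of scanning the channel output for that string. The helper names are made up; only recv() and the channel itself are Paramiko API. Note that with invoke_shell() the typed command line is echoed back, so the separator normally appears twice:

```python
def read_until_separator(channel, separator, occurrences=2):
    """Read from an invoke_shell() channel until the separator has
    appeared the given number of times.  The shell echoes the typed
    command line back, so `echo <separator>` normally produces the
    separator twice: once in the echoed line, once in its output."""
    buffer = ""
    while buffer.count(separator) < occurrences:
        chunk = channel.recv(1024)
        if not chunk:  # channel closed before we saw the separator
            break
        buffer += chunk.decode("utf-8", "replace")
    return buffer

def output_after_separator(buffer, separator):
    """Everything after the last occurrence of the separator belongs
    to the commands sent after the echo."""
    return buffer.rsplit(separator, 1)[1]
```

After sending the three commands above, you would call read_until_separator(channel, 'unique-string-separating-output-of-the-commands') and then keep reading: whatever follows the separator is output of ./executeWhatIWant.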
Though I do not really think that you need that very often. Most commands that are needed to make a specific command work, like cd or set, do not really output anything.
So in most cases you can use SSHClient.exec_command and your code will be way simpler and more reliable:
Execute multiple commands in Paramiko so that commands are affected by their predecessors
Even if you need to use something seemingly complex like su/sudo, it is still better to stick with SSHClient.exec_command:
Executing command using "su -l" in SSH using Python
For a similar question, see:
Combining interactive shell and recv_exit_status method using Paramiko
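As a reference point for the exec_command approach, a minimal helper could look like the sketch below. The function name is made up; exec_command and recv_exit_status are real Paramiko calls:

```python
def run_command(ssh, command):
    """Run one command in its own SSH session on an already-connected
    paramiko.SSHClient and return (exit_status, stdout, stderr)."""
    stdin, stdout, stderr = ssh.exec_command(command)
    status = stdout.channel.recv_exit_status()  # blocks until the command finishes
    return status, stdout.read(), stderr.read()
```

recv_exit_status gives you exactly the "command is finished" signal that the interactive-shell approach lacks.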

Related

Is it possible to use fabric to pass commands to an interactive shell?

I'm trying to automate the following via Fabric:
SSH to a remote host.
Execute a python script (the Django management command dbshell).
Pass known values to prompts that the script generates.
If I were to do this manually, it would like something like:
$ ssh -i ~/.ssh/remote.pem ubuntu@10.10.10.158
ubuntu@10.10.10.158$ python manage.py dbshell
postgres=> Password for ubuntu: _____ # i'd like to pass known data to this prompt
postgres=> # i'd like to pass known data to the prompt here, then exit
=========
My current solution looks something like:
from fabric.api import run
from fabric.context_managers import settings as fabric_settings
with fabric_settings(host_string='10.10.10.158', user='ubuntu', key_filename='~/.ssh/remote.pem'):
    run('python manage.py dbshell')
    # i am now left wondering if fabric can do what i'm asking....
Replied to Sean via Twitter on this, but the first thing to check out here is http://docs.fabfile.org/en/1.10/usage/env.html#prompts - not perfect but may suffice in some situations :)
The upcoming v2 has a more solid implementation of this feature in the pipeline, and that will ideally have room for a more pexpect-like API (meaning, something more serially oriented) as an option too.
You can use Pexpect, which runs the command and watches its output; when the output matches a given pattern, Pexpect can respond as if a human were typing.
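A hypothetical Pexpect sketch of the session above; the prompt strings and helper names are assumptions, and pexpect is a third-party package (pip install pexpect):

```python
def response_for(prompt_text, password):
    """Decide what to type for a recognized prompt (made-up helper)."""
    if "Password" in prompt_text:
        return password
    if prompt_text.strip().endswith("=>"):
        return r"\q"  # quit psql once we reach its prompt
    return ""

def drive_dbshell(host, key, password):
    import pexpect  # third-party; imported lazily so the helpers above stay testable
    child = pexpect.spawn(
        "ssh -i %s ubuntu@%s python manage.py dbshell" % (key, host))
    child.expect("Password.*:")                 # the psql password prompt
    child.sendline(response_for(child.after.decode(), password))
    child.expect("postgres=>")                  # wait for the psql prompt
    child.sendline(response_for("postgres=>", password))
    child.expect(pexpect.EOF)
```

The expect/sendline pairs are the serially-oriented pattern mentioned above: block until a known prompt appears, then answer it.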

Python Thread Breaking Terminal

Hello minds of stackoverflow,
I've run into a perplexing bug. I have a Python script that creates a new thread which SSHes into a remote machine and starts a process. However, this process does not return on its own (and I want it to keep running throughout the duration of my script). In order to force the thread to return, at the end of my script I SSH into the machine again and kill -9 the process. This is working well, except for the fact that it breaks the terminal.
To start the thread I run the following code:
t = threading.Thread(target=run_vUE_rfal, args=(vAP.IP, vUE.IP))
t.start()
The function run_vUE_rfal is as follows:
cmd = "sudo ssh -ti ~/.ssh/my_key.pem user@%s 'sudo /opt/company_name/rfal/bin/vUE-rfal -l 3 -m -d %s -i %s'" % (vUE_IP, vAP_IP, vUE_IP)
output = commands.getstatusoutput(cmd)
return
It seems that when the command is run, it somehow breaks my terminal. It is broken in that instead of creating a new line for each print, it appends the WIDTH of my terminal in whitespace to the end of each line and prints it as seemingly one long string. Also, I am unable to see my keyboard input in that terminal, but it is still successfully read. My terminal looks something like this:
normal formatted output
normal formatted output
running vUE-rfal
print1
print2
print3_extra_long
print4
If I replace the body of the run_vUE_rfal function with some simple prints, the terminal does not break. I have many other ssh's and telnets in this script that work fine. However, this is the only one I'm running in a separate thread as it is the only one that does not return. I need to maintain the ability to close the process of the remote machine when my script is finished.
Any explanations to the cause and idea for a fix are much appreciated.
Thanks in advance.
It seems the process you control is changing terminal settings. These changes bypass stderr and stdout - for good reasons: e.g. ssh itself needs this to ask users for passwords even when its output is being redirected.
A way to solve this could be to use the Python module pexpect (a third-party library) to launch your process, as it will create its own fake tty that you don't care about.
BTW, to "repair" your terminal, use the reset command. As you already noticed, you can still enter commands. reset will set the terminal back to default settings.
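If you'd rather guard against the breakage than repair it afterwards, the standard library can snapshot and restore the terminal settings around the offending call. A sketch, with a made-up helper name:

```python
import termios
from contextlib import contextmanager

@contextmanager
def preserved_tty(fd):
    """Save the terminal settings of fd on entry and restore them on
    exit, so a misbehaving child process cannot leave them broken -
    the programmatic equivalent of running `reset` by hand."""
    saved = termios.tcgetattr(fd)
    try:
        yield
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)
```

Wrapping the commands.getstatusoutput(cmd) call in with preserved_tty(sys.stdin.fileno()): brings the terminal back to its original state even if the remote process mangled it.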

Learning python for security, having trouble with su

Preface: I am fully aware that this could be illegal if not on a test machine. I am doing this as a learning exercise for learning python for security and penetration testing. This will ONLY be done on a linux machine that I own and have full control over.
I am learning python as my first scripting language, hopefully for use down the line in a security position. Upon asking for ideas of scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the user names from /etc/passwd from an account that does NOT have sudo privileges and try to 'su' into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users.
My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, to passing it as a variable, to using the pexpect module. Nothing seems to be working.
When I Google it, all of the recommendations point to using sudo, which in this scenario isn't a valid option, as the user I have access to doesn't have sudo privileges.
I am beyond desperate on this, just to finish the challenge. I have asked on reddit, in IRC and all of my programming wizard friends, and beyond echo "password" | sudo -S su, which can't work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password" | su, I get su: must be run from a terminal. This is at a # and $ prompt.
Is this even possible?
The problem is that su and friends read the password directly from the controlling terminal for the process, not from stdin. The way to get around this is to launch your own "pseudoterminal" (pty). In python, you can do that with the pty module. Give it a try.
Edit: The documentation for python's pty module doesn't really explain anything, so here's a bit of context from the Unix man page for the pty device:
A pseudo terminal is a pair of character devices, a master device and a slave device. The slave device provides to a process an interface identical to that described in tty(4). However, whereas all other devices which provide the interface described in tty(4) have a hardware device of some sort behind them, the slave device has, instead, another process manipulating it through the master half of the pseudo terminal. That is, anything written on the master device is given to the slave device as input and anything written on the slave device is presented as input on the master device. [emphasis mine]
The simplest way to get your pty working is with pty.fork(), which you use like a regular fork. Here's a simple (REALLY minimal) example. Note that if you read more characters than there are available, your process will deadlock: It will try to read from an open pipe, but the only way for the process at the other end to generate output will be if this process sends it something!
import os, pty

pid, fd = pty.fork()
if pid == 0:
    # We're the child process: Switch to running a command
    os.execl("/bin/cat", "cat", "-n")
    print "Exec failed!!!!"
else:
    # We're the parent process
    # Send something to the child process
    os.write(fd, "Hello, world!\n")
    # Read the terminal's echo of what we typed
    print os.read(fd, 14),
    # Read command output
    print os.read(fd, 22)
If all goes well you should see this:
Hello, world!
     1  Hello, world!
Since this is a learning exercise, here's my suggested reading list for you: man fork, man execl, and python's subprocess and os modules (since you're already running subprocess, you may already know some of this). Keep in mind the difference, in Unix and in python, between a file descriptor (which is just a number) and a file object, which is a python object with methods (in C it's a structure or such). Have fun!
If you just want to do this for learning, you can easily build a fake environment with your own faked passwd file. You can use one of Python's built-in hashing functions to generate the passwords. This has the advantage of proper test cases: you know what you are looking for and where you should succeed or fail.
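A sketch of such a fake environment; hashlib stands in for the system's crypt(3) here, and all the names are made up:

```python
import hashlib

SALT = "x1"  # fixed salt; fine for a toy test environment

def make_fake_passwd(users, password):
    """Build the content of a fake passwd-style file where every
    account has the same password."""
    digest = hashlib.sha512((SALT + password).encode()).hexdigest()
    return "\n".join(
        "%s:%s:1000:1000::/home/%s:/bin/bash" % (user, digest, user)
        for user in users)

def try_password(entry, candidate):
    """Check one candidate password against one fake passwd entry -
    the same loop an enumeration script would run over real users."""
    stored = entry.split(":")[1]
    return hashlib.sha512((SALT + candidate).encode()).hexdigest() == stored
```

Your enumeration script can then iterate over the fake file's lines and call try_password with the one known password, without touching a real system.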

How to implement "last command" function in a console based python program

So I am working on a console-based Python (Python 3, actually) program where I use input(">") to get commands from the user.
Now I want to implement a "last command" function in my program - when the user presses the up arrow on the keyboard, they see their last command.
After some research I found I can use the curses lib to implement this, but there are two problems.
curses is not available on Windows.
The other parts of my program use print() for output. I don't want to rewrite them with curses.
So are there any others ways to implement the "last command" function? Thanks.
In newer versions of Python there is a nice module, readline, for handling user input, and rlcompleter for autocompletion purposes. But I think on Windows it will require the installation of a readline library anyway.
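A minimal sketch of that: on platforms where the module is available, merely importing readline is enough to give input() up-arrow history and line editing (on Windows a third-party package such as pyreadline3 provides the same module). The loop below is a made-up toy REPL:

```python
import readline  # importing it is enough to enable history for input()

def prompt_loop():
    """Toy REPL: the up arrow recalls previous commands via readline."""
    while True:
        try:
            command = input("> ")
        except EOFError:
            break
        if command == "quit":
            break
        if command == "history":
            for i in range(1, readline.get_current_history_length() + 1):
                print("%d  %s" % (i, readline.get_history_item(i)))
```

Existing print() calls elsewhere in the program are unaffected, which avoids the curses rewrite.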
What you can do is apply some sort of shell history functionality: every command issued by the user is placed in a list, and then you implement a special command of your console, say history, that prints out the list in the order it was filled, with an increasing number next to every command. Then another special command, say !! (but it could really be anything, like repeat), followed by a command number, fetches that command from the list and executes it without retyping: typing !! 34 would execute command number 34 again, which might be something -a very -b long -c with -d very -e large -f number -g of -h arguments -666.
I am aware it's not exactly the same thing that you wanted, but it is very easy to implement quickly, provides the command repetition functionality you're after, and should be a decent replacement until you figure out how to do it the way you want ;)
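The list-plus-numbers idea above is only a few lines; here is a sketch with made-up names:

```python
class History:
    """Minimal shell-style history: record commands, list them
    numbered, and recall one by number (the `!! N` command)."""
    def __init__(self):
        self.commands = []

    def record(self, command):
        self.commands.append(command)

    def listing(self):
        return ["%d  %s" % (i + 1, cmd)
                for i, cmd in enumerate(self.commands)]

    def recall(self, number):
        return self.commands[number - 1]
```

Your input loop records every line, prints listing() when the user types history, and re-dispatches recall(n) when it sees !! n.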

Using a channel for multiple commands, replies, and interactive commands?

I'm beginning to learn twisted.conch to automate some tasks over SSH.
I tried to modify the sample sshclient.py from http://www.devshed.com/c/a/Python/SSH-with-Twisted/4/ . It runs one command after login and prints the captured output.
What I wanted to do is run a series of commands, and maybe decide what to do based on the output.
The problem I ran into is that twisted.conch.ssh.channel.SSHChannel appears to always close itself after running a command (such as df -h). The example will sendRequest after channelOpen. Then the channel is always closed after dataReceived, no matter what I did.
I'm wondering if this is due to the server sending an EOF after the command, and the channel therefore having to be closed. Should I just open multiple channels for multiple commands?
Another problem is interactive commands (such as rm -i somefile). It seems that because the server doesn't send EOF, SSHChannel.dataReceived never gets called. How do I capture output in this situation, and what do I do to send back a response?
Should I just open multiple channels for multiple commands?
Yep. That's how SSH works.
SSHChannel.dataReceived never gets called
This doesn't sound like what should happen. Perhaps you can include a minimal example which reproduces the behavior.
