Python Paramiko hangs at .recv(1024)

I'm having issues with using invoke_shell and receiving output from a particular command. The purpose of the script is to log in and check the status of a particular application on Linux/Unix servers. The problem stems from the list of servers being huge, and not all servers have this application. The script works on the servers that have the application and even pulls the data and prints it to the screen; however, when the script encounters "-bash: CMD: command not found" it hangs and doesn't iterate through the list.
I also confirmed with Wireshark, filtering for the server's IP (ip.addr == x.x.x.x), that the TCP connection is established, ruling out any ACLs, firewalls, or iptables along the path. I can't elaborate further on the packets within Wireshark as they're encrypted.
I can see the server communicate with my desktop (client) and send multiple encrypted packets of various lengths. The script seems to hang right before the print statement (stuck).
Now granted, the ideal approach would be to use exec_command, but I want to design this script to be scalable in the future.
I've used this script before on network appliances and even on servers. The script works when the expected response is uniform: show route, uname -a, and the like all have predictable output. But I think there is something different about the bash "command not found" error which is causing the issue with the receive.
import paramiko
import getpass

# ***** Open plain-text file with the host list
f = open("nfsus.txt")
# ***** Read & store into variable
hn = f.read().splitlines()
f.close()
# ***** Credentials
username = raw_input("Please Enter Username: ")
password = getpass.getpass("Please Enter Password: ")
# ***** SSH
client = paramiko.SSHClient()

def connect_ssh(hn):
    try:
        client.load_system_host_keys()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(hn, 22, username, password, look_for_keys=False, allow_agent=False)
        print 'Connection Attempting to: ' + hn
        channel = client.get_transport().open_session()
        channel.invoke_shell()
        channel.sendall("CMD \n")
        cmd = channel.recv(1024)
        # SCRIPT HANGS HERE! ^
        print 'stuck'
        #print cmd
    except Exception, e:
        print '*** Caught exception: %s: %s' % (e.__class__, e)
        try:
            channel.close()
        except:
            pass

# ***** Loop through nfsus.txt (contains a list of IP addresses)
for x in hn:
    connect_ssh(x)
I'm new to Python and admit I have a lot to learn! However, I'm really trying my hardest to understand every detail of scripting. I never wrote anything before, in Python or any other language. I'm excited to learn, but please have patience. I went through the Paramiko documentation, but some things I haven't fully understood yet, and I'm hoping someone here would be cool enough to show this noob the error of my ways.
Thank You,

I haven't looked at Paramiko in a dog's age, but in this case bash (the shell) is sending the error message on STDERR, not STDOUT. It may be that Paramiko isn't listening to STDERR, so it's not getting the error messages, or it's trapping them some other way.
ortep@Motte ~
$ foobar
-bash: foobar: command not found
ortep@Motte ~
$ foobar 2>/dev/null
ortep@Motte ~
$
In the second call to "foobar" I redirected (>) STDERR (2) to the /dev/null device.
You can also redirect it to "STDOUT" thusly:
ortep@Motte ~
$ foobar 2>&1
-bash: foobar: command not found
This looks the same because on the ssh console STDOUT and STDERR are (kinda) the same thing.
So that gives you two options that might help. One is to do:
channel.sendall("CMD 2>&1 \n")
The other is to check for the existence of the command:
channel.sendall("if [[-x /path/to/CMD ]]; then /path/to/CMD/; else echo "CMD not found"; fi \n)
The first one is easier to try, the second one you can expand on the "else" section to give you more information (like "; else echo "CMD not on $(hostname)". Which may or may not be useful).
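If you'd rather handle it on the Paramiko side, here is a minimal sketch of the same idea (assuming the hn, username, password and CMD from your question): merge the two streams on the channel before reading, and use an exec-style command so the channel closes when CMD finishes:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hn, 22, username, password)
channel = client.get_transport().open_session()
channel.set_combine_stderr(True)   # merge STDERR into STDOUT, like 2>&1
channel.exec_command("CMD")        # exec-style: the channel sees EOF when CMD exits
output = ''
while True:
    chunk = channel.recv(1024)
    if not chunk:                  # an empty read means the channel is closed
        break
    output += chunk
print(output)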

Related

Why are these python print lines indented? Context: port forwarding with ssh using python Popen

I have this piece of code that is supposed to use subprocess.Popen to forward some ports in the background. It gives unexpectedly indented print statements -- any idea what went wrong here?
Also if you have a better way of port forwarding (in the background) with python, I'm all ears! I've just been opening another terminal and running the SSH command, but surely there's a way to do that programmatically, right?
The code:
import subprocess

print("connecting")
proc = subprocess.Popen(
    ["ssh", f"-L10005:[IP]:10121",
     f"[USERNAME]@[ANOTHER IP]"],
    stdout=subprocess.PIPE
)
for _ in range(100):
    realtime_output = str(proc.stdout.readline(), "utf-8")
    if "[IP]" in realtime_output:
        print("connected")
        break

# ... other code that uses the forwarded ports

print("terminating")
proc.terminate()
Expected behavior (normal print lines):
$ python test.py
connecting
connected
terminating
Actual behavior (wacky print lines):
$ python test.py
connecting
          connected
                   terminating
                              $ [next prompt is here for some reason?]
This is likely because ssh is opening up a full shell on the remote machine (if you type in some commands, they'll probably be run remotely!). You should disable this by passing -N so it doesn't run anything. If you don't ever need to type anything into ssh (i.e. entering passwords or confirming host keys), you can also pass -n so it doesn't read from stdin at all. With that said, it looks like you also can do this entirely within Python with the Fabric library, specifically Connection.forward_local().
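For example, a minimal sketch with Fabric's Connection.forward_local() (the bracketed names are the same placeholders as in the question):
from fabric import Connection

# Forward local port 10005 to port 10121 on [IP], tunneled through the SSH host.
with Connection("[USERNAME]@[ANOTHER IP]") as conn:
    with conn.forward_local(10005, remote_port=10121, remote_host="[IP]"):
        pass  # ... other code that uses the forwarded ports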
The indented line weirdness is due to either ssh or the remote shell changing some terminal settings, one of which adds carriage returns before newlines that get sent to the terminal. When this is disabled, each line will start at the horizontal position of the end of the previous line:
$ stty -onlcr; printf 'foo\nbar\nbaz\n'; stty onlcr
foo
   bar
      baz
$

Erlang: port to Python instance not responding

I am trying to communicate to an external python process through an Erlang port. First, a port is opened, then a message is sent to the external process via stdin. I am expecting a corresponding reply on the process's stdout.
My attempt looks like this:
% open a port
Port = open_port( {spawn, "python -u -"},
                  [exit_status, stderr_to_stdout, {line, 1000000}] ).
% send a command to the port
true = port_command( Port, "print( \"Hello world.\" )\n" ).
% gather response
% PROBLEM: no matter how long I wait flushing will return nothing
flush().
% close port
true = port_close( Port ).
% still nothing
flush().
I realize that someone else on Stackoverflow tried to do something similar but the proposed solution apparently doesn't work for me.
Also, I see that a related post on Erlang Central is starting a Python script through an Erlang port but it is not the Python shell itself that is invoked.
I have taken notice of ErlPort but I have a whole script to be executed in Python. If possible, I wouldn't want to break up the script into single Python calls.
Funny enough, doing it with bash is no problem:
Port = open_port( {spawn, "bash"},
                  [exit_status, stderr_to_stdout, {line, 1000000}] ).
true = port_command( Port, "echo \"Hello world.\"\n" ).
So the above example gives me a "Hello world." on flushing:
3> flush().
Shell got {#Port<0.544>,{data,{eol,"Hello world."}}}
ok
Just what I wanted to see.
Ubuntu 15.04 64 bit
Erlang 18.1
Python 2.7.9
Edit:
I have finally decided to write a script file (with a shebang) to disk and execute that script file, instead of piping the script to the language interpreter, for some languages (like Python).
I suspect the problem has to do with the way some interpreters buffer IO, which I just can't work around, making this extra round trip to disk necessary.
As you've discovered, ports don't do what you'd like for this problem, which is why alternatives like ErlPort exist. An old workaround for this problem is to use netcat to pipe commands into python so that a proper EOF occurs. Here's an example session:
1> PortOpts = [exit_status, stderr_to_stdout, {line,1000000}].
[exit_status,stderr_to_stdout,{line,1000000},use_stdio]
2> Port = open_port({spawn, "nc -l 51234 | python"}, PortOpts).
#Port<0.564>
3> {ok, S} = gen_tcp:connect("localhost", 51234, []).
{ok,#Port<0.565>}
4> gen_tcp:send(S, "print 'hello'\nprint 'hello again'\n").
ok
5> gen_tcp:send(S, "print 'hello, one more time'\n").
ok
6> gen_tcp:close(S).
ok
7> flush().
Shell got {#Port<0.564>,{data,{eol,"hello"}}}
Shell got {#Port<0.564>,{data,{eol,"hello again"}}}
Shell got {#Port<0.564>,{data,{eol,"hello, one more time"}}}
Shell got {#Port<0.564>,{exit_status,0}}
ok
This approach opens a port running netcat as a listener on port 51234 (you can choose whatever port you wish, of course, as long as it's not already in use) with its output piped into python. We then connect to netcat over the local TCP loopback and send Python command strings into it, which it then forwards through its pipe to python. Closing the socket causes netcat to exit, which results in an EOF on python's stdin, which in turn causes it to execute the commands we sent it. Flushing the Erlang shell message queue shows we got the results we expected from python via the Erlang port.
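You can see the EOF requirement from the Python side as well; a small demonstration (independent of the Erlang code) showing that the interpreter only executes a piped-in program once its stdin is closed:
import subprocess

# python reads the whole program from stdin and only runs it at EOF, which is
# why an Erlang port that never closes stdin never gets any output back.
proc = subprocess.Popen(['python', '-u', '-'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = proc.communicate(b'print("Hello world.")\n')  # communicate() closes stdin
print(out)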

pexpect to automate login to pbrun

I am trying to automate pbrun using the following sequence:
ssh user@server.com
pbrun -u user1 bash
pass active directory password
run the command
exit
I created the following script but it's not able to pass the password for pbrun:
import time
import pexpect

child = pexpect.spawn('ssh user@server.com')
child.expect("user@server.com's password:")
child.sendline('Password')
child.expect('.')
child = pexpect.spawn('pbrun -u user1 bash')
child.expect('.*')
time.sleep(10)
child.sendline('Password')  # Active Directory password
child.expect('.*')
child.sendline('ls')
data = child.readline()
print data
The above code successfully does ssh and runs pbrun but is unable to send the password asked by pbrun. Any help is appreciated.
I was able to achieve this with the script below. I tried Python but was not successful; I'm sharing this script since it may be helpful to others.
#!/usr/bin/expect -f
if { $argc<1 } {
    send_user "usage: $argv0 <passwdfile> \n"
    exit 1
}
set timeout 20
set passwdfile [ open [lindex $argv 0] ]
catch {spawn -noecho ./myscript.sh}
expect "Password:" {
    while {[gets $passwdfile passwd] >= 0} {
        send "$passwd\r"
    }
}
expect "*]$\ " {send "exit\r"}
close $passwdfile
send "ls\r"
expect eof
Run the script as below:
./run.exp passfile.txt
Here passfile.txt contains the password in plain text, and myscript.sh contains the pbrun command.
In general it's not a great idea to expect wildcards like . or .* because those can match partial input, and your script will then continue and send its next line, potentially before the server at the other end is even ready to receive/handle it, causing breakage. Be as specific as possible, ideally trying to match the end of whatever the server sends right before it waits for input.
You have access to the string buffers containing what pexpect receives before and after the matched pattern in each child.expect() statement with the following constructs which you can print/process at will:
print child.before
print child.after
You might want to get familiar with them - they're your friends during development/debugging, maybe you can even use them in the actual script implementation.
Using sleeps for timing is not great either: most of the time they just unnecessarily slow down your script execution, and sooner or later things will move at different/unexpected speeds and your script will break. Specific expect patterns, with a timeout exception as a fallback, are generally preferred; I can't think of a case in which sleeps would be just as (or more) reliable.
Check your script's exact communication using these techniques and adjust your patterns accordingly.
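As an illustration only (the prompt patterns and password variables below are assumptions you would adapt to your own environment), the same flow with specific patterns, no wildcards and no sleeps, might look like:
import pexpect

child = pexpect.spawn('ssh user@server.com')
child.expect("password:")        # tail end of ssh's password prompt
child.sendline(ssh_password)
child.expect(r'\$ ')             # shell prompt; adjust to your actual PS1
child.sendline('pbrun -u user1 bash')
child.expect('Password:')        # pbrun's Active Directory prompt
child.sendline(ad_password)
child.expect(r'\$ ')             # prompt of the elevated shell
child.sendline('ls')
child.expect(r'\$ ')
print(child.before)              # everything received before the matched prompt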

Python subprocess rsync using sshpass and specify port

I've searched around quite a bit and found pieces of what I wish to achieve, but not all of it. I'm making a sync script to synchronize files between two machines. The script itself is somewhat more advanced than this question (it provides the possibility for both sides to request file deletion and so on; there is no "master side").
First question
The following bash-command works for me:
rsync -rlvptghe 'sshpass -p <password> ssh -p <port>' <source> <destination>
How can I translate it into a Python command to be used with the subprocess module?
I've managed to get the following python to work:
pw = getpass.getpass("Password for remote host: ")
command = ['sshpass', '-p', pw, 'rsync', '-rlvptgh', source, destination]
p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
while p.poll() is None:
    out = p.stdout.read(1)
    sys.stdout.write(out)
    sys.stdout.flush()
but it doesn't specify a port (it uses the standard port 22; I want another one). To clarify, I wish to use code similar to this, but with support for a specific port as well.
I have already tried to change the command to:
command = ['sshpass', '-p', pw, 'rsync', '-rlvptghe', 'ssh', '-p', '2222', source, destination]
which gives the following error:
ssh: illegal option -- r
and also many other variations such as for instance:
command = ['rsync', '-rlvptghe', 'sshpass', '-p', pw, 'ssh', '-p', '2222', source, destination]
Which gives the following error (where <source> is the remote source host to sync from, i.e. the source variable above the command declaration):
Unexpected remote arg: <source>
How should I specify this command to nest them according to my first bash command?
Second question
In all my searching I've found lots of frowning upon using a command containing the password for scp/rsync (i.e. ssh), which is what I do in my script. My reasoning is that I want to be prompted for a password when I do the synchronization. It is done manually since it gives feedback on filesystem modifications and other things. However, since I make two scp and two rsync calls, I don't want to type the same password four times. That is why I use this approach and let Python (the getpass module) collect the password once and then use it for all four logins.
If the script was planned for an automated setup I would of course use certificates instead, I would not save the password in clear text in a file.
Am I still reasoning the wrong way about this? Are there things I could do to strengthen the integrity of the password used? I've already realized that I should suppress errors coming from the subprocess module since it might display the command with the password.
Any light on the problem is highly appreciated!
EDIT:
I have updated question 1 with some more information as to what I'm after. I also corrected a minor copy-and-paste error in the Python code.
Edit 2: To explain further, I have tried the exact same order as in the first bash command; that was the first thing I tried, and it doesn't work. The reason for changing the order was that the other order (sshpass first) worked when no port was specified.
I have found one way to solve this for my own needs. It involves invoking a shell to handle the command, which I had tried to avoid in the first place. It works for me, though it might not be satisfactory to others; it depends on the environment you want to run the command in. For me this is more or less an extension of the bash shell: I want to do some things that are easier in Python and at the same time run some bash commands (scp and rsync).
I'll wait for a while and if there's no better solution than this I will mark my answer as the answer.
A basic function for running rsync via python with password and port could be:
def syncFiles(pw, source, destination, port, excludeFile=None, dryRun=False, showProgress=False):
    command = 'rsync -rlvptghe \'sshpass -p ' + pw + ' ssh -p ' + port + '\' ' + source + ' ' + destination
    if excludeFile != None:
        command += ' --exclude-from=' + excludeFile
    if dryRun:
        command += ' --dry-run'
    if showProgress:
        command += ' --progress'
    p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
    while p.poll() is None:
        out = p.stdout.read(1)
        sys.stdout.write(out)
        sys.stdout.flush()
The reason this works is, as I wrote, that the invoked shell handles the command instead. This way I can write the command exactly as I would directly in a shell. I still don't know how to do this without shell=True.
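A possible variant without shell=True (an untested sketch; it relies on rsync taking everything after -e as one single argument, so the whole inner command goes into the argument list as one element):
import subprocess

# The complete remote-shell command is a single list element, because
# rsync's -e option expects exactly one argument string.
command = ['rsync', '-rlvptgh',
           '-e', 'sshpass -p {0} ssh -p {1}'.format(pw, port),
           source, destination]
p = subprocess.Popen(command, stdout=subprocess.PIPE)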
Note that the password is collected from the user with the getpass module:
pw = getpass.getpass("Password for current user on remote host: ")
It is not recommended to store your password in the python file or any other file. If you are looking for an automated solution it is better to use private keys. Answers for such solutions can be found by searching.
To call the scp-command with password the following python should do:
subprocess.check_output(['sshpass', '-p', pw, 'scp', '-P', port, source, destination])
I hope this can be useful to someone who wants to achieve what I am doing.

Best way to pipe the output of a local() to the stdin of a remote run() command in Fabric?

Is there a simple way to pipe output from local commands to remote ones (and vice versa)?
I've always just piped to a file, moved the file over, and then read it...but it seems like there could be an easier way.
For simpler situations, just capturing the output and using string interpolation works:
ip = local('hostname -i')
run('Script was run from ip: %s' % ip)
But when the output either needs escaping to be safe on the command line and/or needs to come from stdin it is a bit trickier.
If the output is bash-safe, then something like run('echo "%s" | mycmd' % ip) would do what I'm looking for (which I guess implies that an equivalent question would be "is there a simple way to bash-escape strings?"), but it seems like there should be a "right way" to provide a remote stdin.
Edit:
To clarify: with long-ish inputs there are a number of potential problems with simple string interpolation: classic shell problems (e.g. the output could contain "; rm -rf /), but also (and more realistically, in my case) the output can contain quotes (both single and double).
I think just doing run("echo '%s' | cmd" % output.replace("'", "'\\''")) should work, but there may be edge cases that misses.
As I mentioned above, this seems like the type of thing that fabric could handle more elegantly for me by directly sending a string to the run()'s stdin (though perhaps I've just been spoiled by it handling everything else so elegantly :)
You could send the remote stdin with fexpect, my fabric extension. This also sends a file, but hides it behind an API. You would still have to do the escaping yourself, though.
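For the escaping itself, a minimal sketch using the standard library's shell quoting (assuming the usual fabfile context where local and run come from fabric.api; pipes.quote is the Python 2 spelling, shlex.quote on Python 3):
from pipes import quote  # Python 2; on Python 3 use shlex.quote

ip = local('hostname -i', capture=True)
run('echo %s | mycmd' % quote(ip))  # quote() makes the value safe for the remote shell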
I've done this once in order to send a (binary) stream to a remote server.
It's a bit hackish, as it digs deep into fabric's and paramiko's channels, and there may be untested edge cases, but it mostly seems to do the job:
import socket
import subprocess
from fabric.state import default_channel  # Fabric 1 helper (import path assumed)

def remote_pipe(local_command, remote_command, buf_size=1024*1024):
    '''executes a local command and a remote command (with fabric), and
    sends the local's stdout to the remote's stdin'''
    local_p = subprocess.Popen(local_command, shell=True, stdout=subprocess.PIPE)
    channel = default_channel()  # fabric function
    channel.set_combine_stderr(True)
    channel.settimeout(2)
    channel.exec_command(remote_command)
    try:
        read_bytes = local_p.stdout.read(buf_size)
        while read_bytes:
            channel.sendall(read_bytes)
            read_bytes = local_p.stdout.read(buf_size)
    except socket.error:
        local_p.kill()
    # fail to send data, let's see the return codes and received data...
    local_ret = local_p.wait()
    received = channel.recv(buf_size)
    channel.shutdown_write()
    channel.shutdown_read()
    remote_ret = channel.recv_exit_status()
    if local_ret != 0 or remote_ret != 0:
        raise Exception("remote_pipe failed. Local retcode: {0} Remote retcode: {1} output: {2}".format(local_ret, remote_ret, received))
In case anyone feels like contributing modifications, this is part of btrfs-send-snapshot
This is a slightly improved version of @goncalopp's answer:
import socket
from subprocess import PIPE, Popen
from fabric.state import default_channel  # Fabric 1 helper (import path assumed)

def remote_pipe(local_command, remote_command, buffer_size=1024*1024, channel_timeout=60):
    '''executes a local command and a remote command (with fabric), and
    sends the local's stdout to the remote's stdin'''
    local_process = Popen(local_command, shell=True, stdout=PIPE)
    channel = default_channel()  # Fabric function
    channel.set_combine_stderr(True)
    channel.settimeout(channel_timeout)
    channel.exec_command(remote_command)
    try:
        bytes_to_send = local_process.stdout.read(buffer_size)
        while bytes_to_send:
            channel.sendall(bytes_to_send)
            bytes_to_send = local_process.stdout.read(buffer_size)
    except socket.error:
        # Failed to send data, let's see the return codes and received data...
        local_process.kill()
    local_returncode = local_process.wait()
    channel.shutdown_write()
    remote_output = ""
    try:
        bytes_received = channel.recv(buffer_size)
        while bytes_received:
            remote_output += bytes_received
            bytes_received = channel.recv(buffer_size)
    except socket.error:
        pass
    channel.shutdown_read()
    remote_returncode = channel.recv_exit_status()
    print(remote_output)
    if local_returncode != 0 or remote_returncode != 0:
        raise Exception("remote_pipe() failed, local return code: {0}, remote return code: {1}".format(local_returncode, remote_returncode, remote_output))
Apart from readability, the improvement is that it does not abort with a socket timeout in case the remote command outputs less than buffer_size bytes, and that it prints the complete output of the remote command.
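A hypothetical invocation from a Fabric 1 fabfile (the task name, snapshot path and commands are placeholders, in the spirit of the btrfs-send-snapshot project mentioned above):
from fabric.api import task

@task
def send_snapshot():
    # Stream a local btrfs snapshot straight into `btrfs receive` on the remote end.
    remote_pipe('btrfs send /snapshots/home', 'btrfs receive /backup')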
