I'm using boto.manage.cmdshell to create SSH connections to EC2 instances. Currently, the user has to enter their password every time to decrypt the private key (e.g. ~/.ssh/id_rsa).
Now I want to make the workflow more convenient for users by supporting ssh-agent.
So far I have tried the following, without any success:
- Setting ssh_key_file to None when creating FakeServer. The result was: SSHException('Key object may not be empty')
- Setting ssh_pwd to None when creating SSHClient. The result was: paramiko.ssh_exception.PasswordRequiredException: Private key file is encrypted
Is there a way to use ssh-agent with boto.manage.cmdshell? I know that paramiko, which boto uses, supports it.
(There's another Stack Overflow question with some related answers: Can't get amazon cmd shell to work through boto)
However, you're definitely better off using per-person SSH keys. But if you have those, are they in the target host's authorized_keys file? If so, users just add their key normally with ssh-add (in an ssh-agent session, usually the default on Linux). You need to test with ssh itself first, so that ssh-agent/ssh-add issues are clearly resolved beforehand.
Once you're certain the keys work with ssh normally, the question is whether boto supports ssh-agent at all. Paramiko's SSHClient() can, if I remember correctly; the paramiko code I remember looks roughly like:
paramiko.SSHClient().connect(host, timeout=10, username=user,
                             key_filename=seckey, compress=True)
The seckey was optional, so key_filename could be empty, and that triggered a check of the ssh-agent. Boto's version seems to force the use of a private key file with an explicit call like this, I think with the idea that each instance will have an assigned key and a password to decrypt it:
self._pkey = paramiko.RSAKey.from_private_key_file(server.ssh_key_file,
                                                   password=ssh_pwd)
If so, it means that using boto directly conflicts with using ssh-agent and the standard model of per-user logins and logging of connections by user.
The paramiko.SSHClient() is much more capable, and documents ssh-agent support explicitly (from pydoc paramiko.SSHClient):
Authentication is attempted in the following order of priority:
- The C{pkey} or C{key_filename} passed in (if any)
- Any key we can find through an SSH agent
- Any "id_rsa" or "id_dsa" key discoverable in C{~/.ssh/}
- Plain username/password auth, if a password was given
Basically, you'd have to use paramiko instead of boto.
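For instance, a minimal sketch of an agent-backed connection (the hostname and username here are placeholders): with no pkey or key_filename given, paramiko's SSHClient falls through to any keys held by a running ssh-agent.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# No key material passed in, so paramiko consults the ssh-agent
# (and then ~/.ssh/id_rsa etc.) automatically.
client.connect('ec2-host.example.com', username='ec2-user',
               timeout=10, compress=True)
stdin, stdout, stderr = client.exec_command('uptime')
print(stdout.read())
client.close()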
We had one other issue with paramiko: the connection would not be ready immediately in many cases, requiring us to send a test command through and check its output before sending real commands. Part of this was that we'd start firing off SSH commands (with paramiko) right after creating an EC2 or VPC instance, so there was no guarantee it would be listening for an SSH connection yet, and paramiko would tend to lose commands delivered too soon. We used some code like this to cope:
import os
import re
import sys
import time

import paramiko

debug = False  # module-level flag used by SshCommand

def SshCommand(**kwargs):
    '''
    Run a command on a remote host via SSH.

    Connect to the given host=<host-or-ip>, as user=<user> (defaulting to
    $USER), with optional seckey=<secret-key-file>, timeout=<seconds>
    (default 10), and execute a single command=<command> (assumed to be
    addressing a unix shell at the far end).

    Returns the exit status of the remote command (otherwise would be
    None save that an exception is raised instead).

    Example: SshCommand(host=host, user=user, command=command,
                        timeout=timeout, seckey=seckey)
    '''
    remote_exit_status = None
    if debug:
        sys.stderr.write('SshCommand kwargs: %r\n' % (kwargs,))
    paranoid = True
    host = kwargs['host']
    user = kwargs.get('user') or os.environ['USER']
    seckey = kwargs.get('seckey')
    timeout = kwargs.get('timeout', 10)
    command = kwargs['command']

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    time_end = time.time() + int(timeout)
    ssh_is_up = False
    while time.time() < time_end:
        try:
            ssh.connect(host, timeout=10, username=user, key_filename=seckey,
                        compress=True)
            if paranoid:
                # Send a token through and make sure it comes back, to be
                # certain the command stream is really up.
                token_generator = 'echo xyz | tr a-z A-Z'
                token_result = 'XYZ'  # possibly buried in other text
                stdin, stdout, stderr = ssh.exec_command(token_generator)
                lines = ''.join(stdout.readlines())
                if re.search(token_result, lines):
                    ssh_is_up = True
                    if debug:
                        sys.stderr.write("[%d] command stream is UP!\n"
                                         % time.time())
                    break
            else:
                ssh_is_up = True
                break
        except paramiko.PasswordRequiredException as e:
            sys.stderr.write("usage idiom clash: %r\n" % (e,))
            return False
        except Exception as e:
            sys.stderr.write("[%d] command stream not yet available\n"
                             % time.time())
            if debug:
                sys.stderr.write("exception is %r\n" % (e,))
        time.sleep(1)

    if ssh_is_up:
        # ideally this is where Bcfg2 or Chef or such ilk get called.
        # stdin, stdout, stderr = ssh.exec_command(command)
        chan = ssh._transport.open_session()
        chan.exec_command(command)
        # note that out/err doesn't have inter-stream ordering locked down.
        stdout = chan.makefile('rb', -1)
        stderr = chan.makefile_stderr('rb', -1)
        sys.stdout.write(''.join(stdout.readlines()))
        sys.stderr.write(''.join(stderr.readlines()))
        remote_exit_status = chan.recv_exit_status()
        if debug:
            sys.stderr.write('exit status was: %d\n' % remote_exit_status)
    ssh.close()
    if remote_exit_status is None:
        raise paramiko.SSHException('remote command result undefined')
    return remote_exit_status
We were also trying to enforce not logging into prod directly, so this particular wrapper (an ssh-send-command script) encouraged scripting despite the vagaries of whether Amazon had bothered to start the instance in a timely fashion.
I found a solution to my problem by creating a class SSHClientAgent which inherits from boto.manage.cmdshell.SSHClient and overrides __init__(). In the new __init__() I replaced the call to paramiko.RSAKey.from_private_key_file() with None.
Here is my new class:
import os

import boto.manage.cmdshell
import paramiko

class SSHClientAgent(boto.manage.cmdshell.SSHClient):
    def __init__(self, server,
                 host_key_file='~/.ssh/known_hosts',
                 uname='root', timeout=None, ssh_pwd=None):
        self.server = server
        self.host_key_file = host_key_file
        self.uname = uname
        self._timeout = timeout
        # Replace the call to get the private key: with _pkey left as
        # None, paramiko falls back to the ssh-agent on connect.
        self._pkey = None
        self._ssh_client = paramiko.SSHClient()
        self._ssh_client.load_system_host_keys()
        self._ssh_client.load_host_keys(os.path.expanduser(host_key_file))
        self._ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.connect()
In the function where I create the SSH connection I check for the environment variable SSH_AUTH_SOCK and decide which SSH client to create, as sketched below.
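A minimal sketch of that selection logic (make_ssh_client is a hypothetical helper name; SSHClientAgent is the class above):

import os

import boto.manage.cmdshell

def make_ssh_client(server, ssh_pwd=None):
    # With an agent socket present, skip private-key decryption and
    # let paramiko query the agent on connect.
    if os.environ.get('SSH_AUTH_SOCK'):
        return SSHClientAgent(server)
    return boto.manage.cmdshell.SSHClient(server, ssh_pwd=ssh_pwd)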
Because this question seems to aim somewhere else, I am going to post my problem here:
In my Python script I make multiple requests to a remote server over SSH:
import subprocess

def ssh(command):
    # SERVER is a placeholder for the remote host
    command = 'ssh SERVER "%s"' % command
    output = subprocess.check_output(
        command,
        stderr=subprocess.STDOUT,
        shell=True,
        universal_newlines=True
    )
    return output
Calling e.g. ssh('cat file1') gives me the content of file1 as output.
I now have multiple methods which use this function:

def show_one():
    return ssh('cat file1')

def show_two():
    return ssh('cat file2')

def run():
    one = show_one()
    print(one)
    two = show_two()
    print(two)
Executing run() will open and close the SSH connection for each show_* method, which makes it pretty slow.
Solutions:

1. I can put

Host SERVER
    ControlMaster auto
    ControlPersist yes
    ControlPath ~/.ssh/socket-%r@%h:%p

into my .ssh/config, but I would like to solve this within Python.

2. There is the ssh flag -T to keep a connection open, and in the aforementioned question one answer was to use this with Popen() and p.communicate(), but it is not possible to get the output between the communicate() calls because it throws ValueError: Cannot send input after starting communication.

3. I could somehow change my functions to execute a single ssh command, like echo "--show1--"; cat file1; echo "--show2--"; cat file2, but this looks hacky to me and I hope there is a better method to just keep the SSH connection open and use it like normal.

What I would like to have: a pythonic/bashic way to do the same as I can configure in .ssh/config (see 1.), i.e. declare a specific socket for the connection and explicitly open, use, and close it (a rough sketch of what I mean follows).
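To make that concrete, here is a sketch of that idea using OpenSSH's multiplexing flags (-M opens a master connection, -S names the control socket, -O exit closes it); SERVER and the socket path are placeholders:

import subprocess

SOCKET = '/tmp/ssh-control.sock'  # hypothetical socket path

def open_connection():
    # -M: master mode, -N: no remote command, -f: background after auth
    subprocess.check_call(['ssh', '-M', '-S', SOCKET, '-fN', 'SERVER'])

def ssh(command):
    # Each call reuses the master connection via the control socket.
    return subprocess.check_output(['ssh', '-S', SOCKET, 'SERVER', command],
                                   universal_newlines=True)

def close_connection():
    subprocess.check_call(['ssh', '-S', SOCKET, '-O', 'exit', 'SERVER'])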
Try creating an ssh object from a class and passing it to the functions:
import logging

import paramiko
from pythonping import ping
from scp import SCPClient

log = logging.getLogger(__name__)  # the original snippet assumed a 'log' object

class SSH():
    def __init__(self, ip='192.168.1.1', username='user', password='pass',
                 connect=True, Timeout=10):
        self.ip = ip
        self.username = username
        self.password = password
        self.Timeout = Timeout
        self.ssh = paramiko.SSHClient()
        self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        if connect:
            self.OpenConnection()
            self.scp = SCPClient(self.ssh.get_transport())

    def OpenConnection(self):
        try:
            skip_ping = False
            ping_res = False
            log.info('Sending ping to host (timeout=3,count=3) :' + self.ip)
            try:
                PingRes = ping(target=self.ip, timeout=3, count=3, verbose=True)
                log.info('Ping to host result :' + str(PingRes.success()))
                ping_res = PingRes.success()
            except Exception:
                skip_ping = True
            if ping_res or skip_ping:
                log.info('Starting to open connection....')
                self.ssh.connect(hostname=self.ip, username=self.username,
                                 password=self.password, timeout=self.Timeout,
                                 auth_timeout=self.Timeout,
                                 banner_timeout=self.Timeout)
                self.scp = SCPClient(self.ssh.get_transport())
                log.info('Connection open')
                return True
            else:
                log.error('ssh OpenConnection failed: No Ping to host')
                return False
        except Exception as e:
            # The outer try had no except clause in the original snippet;
            # added here so the code runs.
            log.error('ssh OpenConnection failed: ' + str(e))
            return False

myssh = SSH(ip='192.168.1.1', password='mypass', username='myusername')
The ping is wrapped in a try/except because sometimes my machine returns an error; you can remove it and just verify a ping to the host.
The self.scp is for file transfer.
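For example (the file paths are placeholders):

# Upload a file to the host and fetch one back.
myssh.scp.put('local_file.txt', '/tmp/remote_file.txt')
myssh.scp.get('/tmp/remote_file.txt', 'local_copy.txt')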
A "direct-tcpip" request (commonly known as port-forwarding) occurs when you run SSH as ssh user#host -L <local port>:<remote host>:<remote port> and then try to connect over the local port.
I'm trying to implement direct-tcpip on a custom SSH server, and Paramiko offers the check_channel_direct_tcpip_request in the ServerInterface class in order to check if the "direct-tcpip" request should be allowed, which can be implemented as follows:
class Server(paramiko.ServerInterface):
    # ...
    def check_channel_direct_tcpip_request(self, chanid, origin, destination):
        return paramiko.OPEN_SUCCEEDED
However, when I use the aforementioned SSH command and connect over the local port, nothing happens, probably because I need to implement the connection handling myself.
Reading the documentation, it also appears that the channel is only opened after OPEN_SUCCEEDED has been returned.
How can I handle the direct-tcpip request after returning OPEN_SUCCEEDED for the request?
You indeed do need to set up your own connection handler. This is a lengthy answer to explain the steps I took; some of it you will not need if your server code already works. The whole running server example in its entirety is here: https://controlc.com/25439153
I used the Paramiko example server code from https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py as a basis and grafted some socket code onto it. This does not have any error handling, thread-related niceties, or anything else "proper" for that matter, but it allows you to use the port forwarder.
This also has a lot of things you do not need, as I did not want to start tidying up a dummy example. Apologies for that.
To start with, we need the forwarder tools. This creates a thread to run the "tunnel" forwarder. This also answers your question of where you get your channel: you accept() it from the transport, but you need to do that in the forwarder thread. As you stated in your OP, it is not there yet in the check_channel_direct_tcpip_request() function, but it will eventually become available to the thread.
import select
import socket
import threading

def tunnel(sock, chan, chunk_size=1024):
    # Shuttle bytes between the local socket and the SSH channel until
    # either side closes.
    while True:
        r, w, x = select.select([sock, chan], [], [])
        if sock in r:
            data = sock.recv(chunk_size)
            if len(data) == 0:
                break
            chan.send(data)
        if chan in r:
            data = chan.recv(chunk_size)
            if len(data) == 0:
                break
            sock.send(data)
    chan.close()
    sock.close()

class ForwardClient(threading.Thread):
    daemon = True

    def __init__(self, address, transport, chanid):
        threading.Thread.__init__(self)
        # Open a TCP connection to the requested forwarding destination.
        self.socket = socket.create_connection(address)
        self.transport = transport
        self.chanid = chanid

    def run(self):
        # accept() the channel from the transport; it only becomes
        # available some time after OPEN_SUCCEEDED is returned.
        while True:
            chan = self.transport.accept(10)
            if chan is None:
                continue
            print("Got new channel (id: %i)." % chan.get_id())
            if chan.get_id() == self.chanid:
                break
        try:
            tunnel(self.socket, chan)
        except Exception:
            pass
Back to the example server code. Your server class needs to have transport as a parameter, unlike in the example code:
class Server(paramiko.ServerInterface):
    # 'data' is the output of base64.b64encode(key)
    # (using the "user_rsa_key" files)
    data = (
        b"AAAAB3NzaC1yc2EAAAABIwAAAIEAyO4it3fHlmGZWJaGrfeHOVY7RWO3P9M7hp"
        b"fAu7jJ2d7eothvfeuoRFtJwhUmZDluRdFyhFY/hFAh76PJKGAusIqIQKlkJxMC"
        b"KDqIexkgHAfID/6mqvmnSJf0b5W8v5h2pI/stOSwTQ+pxVhwJ9ctYDhRSlF0iT"
        b"UWT10hcuO4Ks8="
    )
    good_pub_key = paramiko.RSAKey(data=decodebytes(data))

    def __init__(self, transport):
        self.transport = transport
        self.event = threading.Event()
Then you will override the relevant method and create the forwarder there:
    def check_channel_direct_tcpip_request(self, chanid, origin, destination):
        print(chanid, origin, destination)
        f = ForwardClient(destination, self.transport, chanid)
        f.start()
        return paramiko.OPEN_SUCCEEDED
You need to add the transport parameter to the creation of the server class:
t.add_server_key(host_key)
server = Server(t)
This example server requires you to have an RSA private key in the directory, named test_rsa.key. Create any RSA key there; you do not need it, but I did not bother to strip its use out of the code.
You can then run your server (runs on port 2200) and issue
ssh -p 2200 -L 2300:www.google.com:80 robey@localhost
(password is foo)
Now when you try
telnet localhost 2300
and type something there, you will get a response from Google.
I am looping over a number of hosts (about 400) to get some information from the systems. Most of them have SSH keys, so no problem there. Only a handful don't have SSH keys and come back asking for a password. Now I want to detect when I'm asked for a password, kill the ssh process, and continue to the next host.
Here's the code for the SSH part:
import subprocess
import sys

def get_os_info(hostname, command, user=None):
    remote = "%s" % hostname if user is None else "%s@%s" % (user, hostname)
    ssh = subprocess.Popen(["ssh", remote, command],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
    result = ssh.stdout.readlines()
    if result == []:
        error = ssh.stderr.readlines()
        print >>sys.stderr, "ERROR: %s" % error
        return None
    else:
        return result
I am not able to use paramiko.
Instead of trying to detect the password prompt, avoid prompting altogether by looking at batch mode (with OpenSSH):

ssh -oBatchMode=yes

which is designed to prevent prompting for passwords when SSH is called from scripts. Now you just need to detect whether you connected or not.
(Although, as your comment at the bottom suggests, using paramiko is probably the technically better solution.)
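A minimal sketch of that detection, as a hypothetical variant of the get_os_info() function above (the ssh exit status tells you whether the connection succeeded):

import subprocess
import sys

def get_os_info_batch(hostname, command, user=None):
    remote = hostname if user is None else "%s@%s" % (user, hostname)
    ssh = subprocess.Popen(["ssh", "-oBatchMode=yes", remote, command],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           universal_newlines=True)
    out, err = ssh.communicate()
    if ssh.returncode != 0:
        # Password-only hosts now fail fast instead of prompting.
        sys.stderr.write("ERROR: %s: %s" % (hostname, err))
        return None
    return out.splitlines(True)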
Fairly new to Python. I'm able to SSH to a switch and get the data needed using the below script, but I need to pull this information from 100+ switches. What do I need to add to this script to achieve that? Thanks
from paramiko import client

class ssh:
    client = None

    def __init__(self, address, username, password):
        # Let the user know we're connecting to the server
        print("Connecting to switch.")
        # Create a new SSH client
        self.client = client.SSHClient()
        # The following line is required if you want the script to be able
        # to access a server that's not yet in the known_hosts file
        self.client.set_missing_host_key_policy(client.AutoAddPolicy())
        # Make the connection
        self.client.connect(address, username=username, password=password,
                            look_for_keys=False)

    def sendCommand(self, command):
        if self.client:
            stdin, stdout, stderr = self.client.exec_command(command)
            while not stdout.channel.exit_status_ready():
                # Print data when available
                if stdout.channel.recv_ready():
                    alldata = stdout.channel.recv(1024)
                    prevdata = b"1"
                    while prevdata:
                        prevdata = stdout.channel.recv(1024)
                        alldata += prevdata
                    print(str(alldata, "utf8"))
        else:
            print("Connection not opened.")

connection = ssh("x.x.x.x", "user", "pwd")
connection.sendCommand("show int status | e not")
Do the switches mean instances? So you want to run a command on multiple machines? You can try fabric:
#roles("instance-group")
#task
def upload():
"""上传代码到测试服务器"""
local("tar -zc -f /tmp/btbu-spider.tar.gz ../BTBU-Spider")
put("/tmp/btbu-spider.tar.gz", "/tmp/")
local('rm /tmp/btbu-spider.tar.gz')
run("tar -zx -f /tmp/btbu-spider.tar.gz -C ~/test/")
Then you can define the "instance-group" SSH hosts and call this task from your local machine, but all of these commands actually run on the remote instance (except for the local() calls).
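For example, a sketch of the role definition (the hosts are placeholders), placed in the same fabfile:

from fabric.api import env

# Map the role name used by @roles to the machines it covers.
env.roledefs = {
    'instance-group': ['user@10.0.0.1', 'user@10.0.0.2'],
}

You then run the task from your local machine with fab upload.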
Add the information about all the switches into a configuration file, for example:

address1,username1,password1
address2,username2,password2

Each switch can be on a new line, and the arguments can be separated by a comma for ease of use. Then open the file and read it line by line, parsing each one:
with open('switches.conf', 'r') as f:
    for line in f:  # Read line by line
        params = line.strip().split(',')  # Split the three arguments by comma
        address = params[0]
        username = params[1]
        password = params[2]
        connection = ssh(address, username, password)
        connection.sendCommand("show int status | e not")
        # If you have to write any information about the switches,
        # this method is suggested:
        with open(address, 'w+') as wf:
            wf.write(...)  # information to write
All of that must be done in the loop. You can just call two functions in that loop, the one that initializes the SSH connection and the one that gets the information, and this could be your main function. Of course, the configuration example is just a guideline; it has a lot of flaws, especially security ones, so shape it however you want. Anyway, the idea is that you just need a loop, and having a configuration file to read from makes it a lot easier :) And you can save the information to different text files named after the addresses of the instances, as @Santosh Kumar suggested in the comments.
EDIT: Edited my answer and added an example for the connection and sending the command, as I hadn't noticed it was a class.
You can use parallel-ssh for running SSH commands on multiple machines. Get it by running pip install parallel-ssh.
from pssh.clients import ParallelSSHClient

hosts = ['switch1', 'switch2']
client = ParallelSSHClient(hosts, user='my_user', password='my_pass')
output = client.run_command('show int status | e not')
for host, host_output in output.items():
    print(host, "".join(host_output.stdout))
Just using my first paramiko script: we have an Opengear console server, so I'm trying to automate setup of any device we plug into it.
The Opengear listens for SSH connections on ports; for example, a device in port 1 would be reachable on port 3001. I am connecting to a device on port 8, which works, and my script runs, but for some reason, after I get the "Interactive SSH session established" message, I need to hit return in the session to make it run (I have an SSH session open and the script does too; it's shared).
It just waits there until I hit return. I've tried sending returns, as you can see, but they don't work; only a manual return works, which is odd because technically they are the same thing?
import paramiko
import time

def disable_paging(remote_conn):
    '''Disable paging on a Cisco router'''
    remote_conn.send("terminal length 0\n")
    time.sleep(1)
    # Clear the buffer on the screen
    output = remote_conn.recv(1000)
    return output

if __name__ == '__main__':
    # VARIABLES THAT NEED CHANGED
    ip = '192.168.1.10'
    username = 'root'
    password = 'XXXXXX'
    port = 3008

    # Create instance of SSHClient object
    remote_conn_pre = paramiko.SSHClient()

    # Automatically add untrusted hosts (make sure okay for security policy
    # in your environment)
    remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    # initiate SSH connection
    remote_conn_pre.connect(ip, username=username, password=password,
                            port=port, look_for_keys=False, allow_agent=False)
    print "SSH connection established to %s" % ip

    # Use invoke_shell to establish an 'interactive session'
    remote_conn = remote_conn_pre.invoke_shell()
    print "Interactive SSH session established"
    time.sleep(1)
    remote_conn.send("\n")

    # Strip the initial router prompt
    #output = remote_conn.recv(1000)
    # See what we have
    #print output

    # Turn off paging
    #disable_paging(remote_conn)

    # clear any config sessions
    is_global = remote_conn.recv(1024)
    if ")#" in is_global:
        remote_conn.send("end\n")
        time.sleep(2)

    # if not in enable mode go to enable mode
    is_enable = remote_conn.recv(1024)
    if ">" in is_enable:
        remote_conn.send("enable\n")
        time.sleep(1)

    remote_conn.send("conf t\n")
    remote_conn.send("int g0/0/1\n")
    remote_conn.send("ip address 192.168.1.21 255.255.255.0\n")
    remote_conn.send("no shut\n")
    remote_conn.send("end\n")

    # Wait for the command to complete
    time.sleep(2)
    remote_conn.send("ping 192.168.1.1\n")
    time.sleep(1)
    output = remote_conn.recv(5000)
    print output
I tried this and saw that

is_global = remote_conn.recv(1024)

hangs. Are you sure '192.168.1.10' sends something to be received? Try setting a timeout:

remote_conn.settimeout(3)

3 seconds for example; do it after this line:

remote_conn = remote_conn_pre.invoke_shell()

This way the recv function does not hang and simply continues when the timeout expires.
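A sketch of how the recv() call then behaves: paramiko raises socket.timeout when the timer expires, so you can catch it and move on.

import socket

remote_conn = remote_conn_pre.invoke_shell()
remote_conn.settimeout(3)
try:
    is_global = remote_conn.recv(1024)
except socket.timeout:
    # Nothing arrived within 3 seconds; continue instead of hanging.
    is_global = ""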
Works for me: first send some command ("ls -ltr\n") and then call sleep:

remote_conn.send("ls -ltr\n")
time.sleep(1)
Try running your command in a debugger and find out what line is waiting for input. You might also try sending \r or \r\n instead of just \n. Remember, the Enter key is really ^M.
You might also try turning on detailed logging.
import logging
# ...
logging.getLogger("paramiko").setLevel(logging.DEBUG)
I've found another module (netmiko) which does exactly what I want and does all these checks. I've since abandoned trying to do it myself when someone else has already done it better.
Use Netmiko! :)