I am trying to establish an SSH connection between a Windows PC and a Linux server (an Amazon EC2 instance).
I decided to use the Fabric API, implemented in Python.
I have PuTTY installed on the Windows PC.
My fabfile script looks like this:
import sys
from fabric.api import *

def testlive():
    print 'Test live ...'
    run("uptime")

env.use_ssh_config = False
env.host_string = "host.something.com"
env.user = "myuser"
env.keys_filename = "./private_openssh.key"
env.port = 22
env.gateway = "proxyhost:port"

testlive()
I am running Fabric from the same directory as the private key.
I am able to log in to this machine using PuTTY.
The problem: I am constantly asked for a login password for the specified user.
Based on other posts (here and here) I have already tried to:
pass the key file to env.keys_filename as a list
use username@host_string
use env.host instead of env.host_string
How do I properly configure Fabric to work with a proxy server and an SSH private key file?
The following should work.
env.key_filename = "./private_openssh.key"
(notice the typo in your attempt)
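With that fix, the settings section of your fabfile should look like this (same values as in your attempt, only the attribute name corrected):

env.use_ssh_config = False
env.host_string = "host.something.com"
env.user = "myuser"
env.key_filename = "./private_openssh.key"  # key_filename, not keys_filename
env.port = 22
env.gateway = "proxyhost:port"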
That said, Fabric's API is best avoided, really; it has way too many bugs and issues (see its issue tracker).
You can do what you want in Python with the following:
from __future__ import print_function
from pssh import ParallelSSHClient
from pssh.utils import load_private_key
client = ParallelSSHClient(['host.something.com'],
                           pkey=load_private_key('private_openssh.key'),
                           proxy_host='proxyhost',
                           proxy_port=<proxy port number>,
                           user='myuser',
                           proxy_user='myuser')

output = client.run_command('uname')
for line in output['host.something.com'].stdout:
    print(line)
ParallelSSH is available from pip as parallel-ssh.
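For example:

pip install parallel-ssh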
PuTTYgen is what you will use to generate your SSH key; then upload the copied public key to your cloud management portal (see Joyent).
You will have to generate and authenticate a private key. To do so, use PuTTYgen to generate RSA-based SSH access with a password and key comment, and confirm the key passphrase. Here is a step-by-step guide: SSH Access using RSA Key Authentication.
When I run this script I receive an "SSH connection failed: Host key is not trusted" error. Even after connecting to this host once to accept the key, I keep receiving this error.
import asyncio, asyncssh, sys

async def run_client():
    async with asyncssh.connect('172.18.17.9', username="user", password="admin", port=9321) as conn:
        result = await conn.run('display version', check=True)
        print(result.stdout, end='')

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))
Try adding the known_hosts=None parameter to the connect method.
asyncssh.connect('172.18.17.9', username="user", password="admin", port=9321, known_hosts=None)
From the asyncssh documentation:
https://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SSHClientConnectionOptions
known_hosts (see Specifying known hosts) – (optional) The list of keys
which will be used to validate the server host key presented during
the SSH handshake. If this is not specified, the keys will be looked
up in the file .ssh/known_hosts. If this is explicitly set to None,
server host key validation will be disabled.
For me, it runs smoothly after adding known_hosts=None.
Here's my example from trying the coding sample in the Ortega book:
I tried it with the hostname/IP, username, and password of a local CentOS machine; the test command is ifconfig.
import asyncssh
import asyncio
import getpass

async def execute_command(hostname, command, username, password):
    async with asyncssh.connect(hostname, username=username, password=password, known_hosts=None) as connection:
        result = await connection.run(command)
        return result.stdout
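To actually run the coroutine you still need an event loop; a minimal usage sketch with placeholder connection details:

# Hypothetical host, command, and credentials for illustration
loop = asyncio.get_event_loop()
output = loop.run_until_complete(
    execute_command('192.168.1.100', 'ifconfig', 'centos', 'mypassword'))
print(output)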
You should always validate the server's public key.
Depending on your use case you can:
Get the server's host keys, bundle them with your app, and explicitly pass them to asyncssh (e.g., as a path to your known_hosts file); see the sketch after this list.
Manually connect to the server on the command line. SSH will then ask you if you want to trust the server. The keys are then added to ~/.ssh/known_hosts and AsyncSSH will use them.
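For the first option, here is a minimal sketch (the known_hosts file name is an assumption; point it at wherever you bundle the keys):

import asyncio, asyncssh

async def run_client():
    # Validate the server host key against a known_hosts file shipped with the app
    async with asyncssh.connect('172.18.17.9', username="user", password="admin",
                                port=9321, known_hosts='bundled_known_hosts') as conn:
        result = await conn.run('display version')
        print(result.stdout, end='')

asyncio.get_event_loop().run_until_complete(run_client())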
This is related but maybe not totally your salvation:
https://github.com/ronf/asyncssh/issues/132
The real question you should be asking yourself (help us help you) is: where exactly is it failing? Known hosts are, by analogy, like environment variables that don't show up when you need them to.
EDIT: Questions that immediately come to mind: the host key is found but not trusted? Hmm?
EDIT2: Not trying to be harsh, but I think this is a helpful corrective. You've got a software library that can find the key, but the key is not known/trusted. You're going to come across a lot of scenarios with SSH, shells, and environment variables where things you take for granted aren't known. Think clearly to help yourself and to ask the question better.
I want to be able to ssh into an EC2 instance, and run some shell commands in it, like this.
How do I do it in boto3?
This thread is a bit old, but since I've spent a frustrating afternoon discovering a simple solution, I might as well share it.
NB This is not a strict answer to the OP's question, as it doesn't use ssh. But, one point of boto3 is that you don't have to - so I think in most circumstances this would be the preferred way of achieving the OP's goal, as they can use their existing boto3 configuration trivially.
AWS' Run Command is built into botocore (so this should apply to both boto and boto3, as far as I know) but disclaimer: I've only tested this with boto3.
import boto3

def execute_commands_on_linux_instances(client, commands, instance_ids):
    """Runs commands on remote linux instances

    :param client: a boto/boto3 ssm client
    :param commands: a list of strings, each one a command to execute on the instances
    :param instance_ids: a list of instance_id strings, of the instances on which to execute the command
    :return: the response from the send_command function (check the boto3 docs for ssm client.send_command())
    """
    resp = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return resp

# Example use:
ssm_client = boto3.client('ssm')  # Need your credentials here
commands = ['echo "hello world"']
instance_ids = ['an_instance_id_string']
execute_commands_on_linux_instances(ssm_client, commands, instance_ids)
For Windows instances, to run PowerShell commands you'd use an alternative document:
DocumentName="AWS-RunPowerShellScript",
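For instance, the same call with the PowerShell document (the example command is illustrative):

resp = client.send_command(
    DocumentName="AWS-RunPowerShellScript",
    Parameters={'commands': ['Get-Process']},
    InstanceIds=instance_ids,
)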
You can use the following code snippet to SSH to an EC2 instance and run a command (using paramiko for the SSH connection alongside boto3).
import boto3
import botocore
import paramiko

key = paramiko.RSAKey.from_private_key_file('path/to/mykey.pem')
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Connect/ssh to an instance
try:
    # Here 'ubuntu' is the user name and 'instance_ip' is the public IP of the EC2 instance
    client.connect(hostname=instance_ip, username="ubuntu", pkey=key)

    # Execute a command (cmd) after connecting/ssh to an instance
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read())

    # Close the client connection once the job is done
    client.close()
except Exception as e:
    print(e)
Here is how I have done it:
import boto3
import paramiko

ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
i = 0
for instance in instances:
    print(instance.id, instance.instance_type)
    i += 1

x = int(input("Enter your choice: "))

try:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('address to .pem key')
    ssh.connect(instance.public_dns_name, username='ec2-user', pkey=privkey)
    stdin, stdout, stderr = ssh.exec_command('python input_x.py')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        x = line.decode()
        print(x, i)
    ssh.close()
except Exception as e:
    print(e)
For the credentials, I installed the AWS CLI package, then ran
aws configure
in the terminal and entered the credentials. All of them are saved in the .aws folder (you can change the path too).
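After running aws configure, the saved credentials look something like this (placeholder values):

# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY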
You can also use the kitten Python library for this, which is just a wrapper around boto3. You can also run the same command on multiple servers at the same time using this utility.
For example:
kitten run uptime ubuntu 18.105.107.20
You don't SSH from Python. You can use the boto3 module to interact with the EC2 instance.
Here you have the complete documentation of boto3 and the commands you can run with it.
Boto provided a way to SSH into EC2 instances programmatically using Paramiko and then run commands. Boto3 does not include this functionality. You could probably modify the boto code to work with boto3 without a huge amount of effort. Or you could look into using something like fabric or ansible which provide a much more powerful way to remotely execute commands on EC2 instances.
Use boto3 to discover the instances and Fabric to run commands on them; a sketch of the combination follows.
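A minimal sketch of that combination, assuming Fabric 2's Connection API; the user name and key path are placeholders:

import boto3
from fabric import Connection

ec2 = boto3.resource('ec2')
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

for instance in running:
    # Connect to each running instance and run a command
    conn = Connection(host=instance.public_dns_name,
                      user='ubuntu',  # placeholder user name
                      connect_kwargs={'key_filename': 'path/to/key.pem'})  # placeholder key path
    conn.run('uptime')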
I am trying to connect to eDirectory using Python. It is not as easy as connecting to Active Directory with Python, so I am wondering if this is even possible. I am currently running Python 3.4.
I'm the author of ldap3, I use eDirectory for testing the library.
Just try the following code:

from ldap3 import Server, Connection, ALL, SUBTREE

server = Server('your_server_name', get_info=ALL)  # don't use get_info if you don't need info on the server and the schema
connection = Connection(server, 'your_user_name_dn', 'your_password')
connection.bind()
if connection.search('your_search_base', '(objectClass=*)', SUBTREE, attributes=['cn', 'objectClass', 'your_attribute']):
    for entry in connection.entries:
        print(entry.entry_get_dn())
        print(entry.cn, entry.objectClass, entry.your_attribute)
connection.unbind()
If you need a secure connection just change the server definition to:
server = Server('your_server_name', get_info=ALL, use_tls=True) # default tls configuration on port 636
Also, any example in the docs at https://ldap3.readthedocs.org/en/latest/quicktour.html should work with eDirectory.
Bye,
Giovanni
I have a flask app hosted on Heroku that needs to run commands on an AWS EC2 instance (Amazon Linux AMI) using boto.cmdshell. A couple of questions:
Is using a key pair to access the EC2 instance the best practice? Or is using username/password better?
If using a key pair is the preferred method, what's the best practice on managing/storing private keys on Heroku? Obviously putting the private key in git is not an option.
Thanks.
Heroku lets you take advantage of config variables to manage your application. Here is an example of my config.py file that lives inside my Flask application:
import os
# flask
PORT = int(os.getenv("PORT", 5000))
basedir = str(os.path.abspath(os.path.dirname(__file__)))
SECRET_KEY = str(os.getenv("APP_SECRET_KEY"))
DEBUG = str(os.getenv("DEBUG"))
ALLOWED_EXTENSIONS = str(os.getenv("ALLOWED_EXTENSIONS"))
TESTING = os.getenv("TESTING", False)
# s3
AWS_ACCESS_KEY_ID = str(os.getenv("AWS_ACCESS_KEY_ID"))
AWS_SECRET_ACCESS_KEY = str(os.getenv("AWS_SECRET_ACCESS_KEY"))
S3_BUCKET = str(os.getenv("S3_BUCKET"))
S3_UPLOAD_DIRECTORY = str(os.getenv("S3_UPLOAD_DIRECTORY"))
Now I can have two different sets of values: it pulls from my environment variables when my application is on my local computer, and from Heroku config variables when in production. For example,
DEBUG = str(os.getenv("DEBUG"))
is "TRUE" on my local computer but "False" on Heroku. To check your Heroku config, run:
heroku config
Also keep in mind that if you ever want to keep some files as part of your project locally but not on Heroku or GitHub, you can use .gitignore. Of course, those files won't exist in your production application then.
What I was looking for was guidance on how to deal with private keys. Both @DrewV and @yfeldblum pointed me in the right direction. I ended up turning my private key into a string and storing it in a Heroku config variable.
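For example, you can load the key file's contents into a config variable from the command line (assuming the Heroku CLI and a local key file path):

heroku config:set AWS_PRIVATE_KEY="$(cat path/to/private_key.pem)"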
If anyone is looking to do something similar, here's a sample code snippet using paramiko:
import os
import io
import paramiko

key = paramiko.RSAKey.from_private_key(io.StringIO(str(os.environ.get("AWS_PRIVATE_KEY"))))
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(str(os.environ.get("EC2_PUBLIC_DNS")), username='ec2-user', pkey=key)
stdin, stdout, stderr = ssh.exec_command('ps')
for line in stdout:
    print('... ' + line.strip('\n'))
ssh.close()
Thanks to @DrewV and @yfeldblum for helping (upvotes for both).
You can use config vars to store config items in an application running on Heroku.
You can use a username/password combination. You may make the username something easy; but be sure to generate a strong password, e.g., with something like openssl rand -base64 32.
I am trying to create a fabfile.py so that I can deploy on EC2. I have the following in my fabfile.py:
from __future__ import with_statement
from fabric.api import *

def ec2():
    env.hosts = ['111.111.111.111']
    env.user = 'ubuntu'
    env.key_filename = '/path/to/my/pem/key.pem'

def run_ls():
    run('ls -alt')
'111.111.111.111' is the Elastic IP of my instance, and I always log in as ubuntu, not root.
When I run the following command
fab ec2 run_ls
I see the following output:
[111.111.111.111] Executing task 'run_ls'
[111.111.111.111] run: ls -alt
Fatal error: Host key for 111.111.111.111 did not match pre-existing key! Server's key was changed recently, or possible man-in-the-middle attack.
Aborting.
Not sure what is going on, but I can't seem to find any great tutorials on using Fabric on EC2, and I do not know how to fix this.
Thanks
Update:
Looks like
env.hosts = ['111.111.111.111']
is not valid; you need to use the actual hostname:
env.hosts = ['mywebsite.com']
which fixed my issue.
You can also use the '--disable-known-hosts' switch to ignore this error.
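For example:

fab --disable-known-hosts ec2 run_ls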
Make sure your Elastic IP is attached to the instance. I thought key_filename takes a single string, but mine works when you pass in a list instead:
env.user = "ubuntu"
env.key_filename = ["my_key.pem",]
Maybe you should try using the public hostname of your instance like:
env.roledefs.update({
    'prod': ['ec2-52-14-72-225.us-west-1.compute.amazonaws.com'],
})
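You can then target the role on the command line, e.g.:

fab -R prod run_ls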
From a Vagrant issue on GitHub, you may need to remove the host from the known_hosts file using a command like this:
ssh-keygen -R 111.111.111.111