I want to be able to SSH into an EC2 instance and run some shell commands on it, like this.
How do I do it in boto3?
This thread is a bit old, but since I've spent a frustrating afternoon discovering a simple solution, I might as well share it.
NB: This is not a strict answer to the OP's question, as it doesn't use SSH. But one point of boto3 is that you don't have to, so I think in most circumstances this would be the preferred way of achieving the OP's goal, since the existing boto3 configuration can be reused trivially.
AWS' Run Command is built into botocore (so this should apply to both boto and boto3, as far as I know), but disclaimer: I've only tested this with boto3.
def execute_commands_on_linux_instances(client, commands, instance_ids):
    """Runs commands on remote linux instances
    :param client: a boto/boto3 ssm client
    :param commands: a list of strings, each one a command to execute on the instances
    :param instance_ids: a list of instance_id strings, of the instances on which to execute the command
    :return: the response from the send_command function (check the boto3 docs for ssm client.send_command())
    """
    resp = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return resp
# Example use:
import boto3

ssm_client = boto3.client('ssm')  # Needs your credentials/region configured
commands = ['echo "hello world"']
instance_ids = ['an_instance_id_string']
execute_commands_on_linux_instances(ssm_client, commands, instance_ids)
For PowerShell commands on Windows instances, you'd use the alternative document:
DocumentName="AWS-RunPowerShellScript",
You can use the following code snippet to SSH to an EC2 instance and run commands there with paramiko.
import boto3
import botocore
import paramiko

key = paramiko.RSAKey.from_private_key_file("path/to/mykey.pem")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Connect/ssh to an instance
try:
    # Here 'ubuntu' is the user name and 'instance_ip' is the public IP of the EC2 instance
    client.connect(hostname=instance_ip, username="ubuntu", pkey=key)

    # Execute a command (cmd) after connecting/ssh to an instance
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read())

    # Close the client connection once the job is done
    client.close()
except Exception as e:
    print(e)
Here is how I have done it:
import boto3
import paramiko

ec2 = boto3.resource('ec2')
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

# List the running instances and let the user pick one by index
instances = list(running)
for i, instance in enumerate(instances):
    print(i, instance.id, instance.instance_type)
choice = int(input("Enter your choice: "))
instance = instances[choice]

try:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('address to .pem key')
    ssh.connect(instance.public_dns_name, username='ec2-user', pkey=privkey)
    stdin, stdout, stderr = ssh.exec_command('python input_x.py')
    data = stdout.read().splitlines()
    for line in data:
        print(line.decode())
    ssh.close()
except Exception as e:
    print(e)
For the credentials, I installed the AWS CLI package and then ran the following in the terminal:
aws configure
and entered the credentials. They are all saved in the .aws folder; you can change the path too.
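If you want to confirm which credentials boto3 is actually picking up from that configuration, a quick sketch is to ask STS who you are:

import boto3

# Uses whatever credentials boto3 resolves: env vars, ~/.aws/credentials, or an IAM role
sts = boto3.client('sts')
identity = sts.get_caller_identity()
print(identity['Account'], identity['Arn'])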
You can also use the kitten Python library for that, which is just a wrapper around boto3. It also lets you run the same command on multiple servers at the same time.
For example:
kitten run uptime ubuntu 18.105.107.20
You don't SSH from Python. You can use the boto3 module to interact with the EC2 instance.
Here you have the complete boto3 documentation and the commands you can run with it.
Boto provided a way to SSH into EC2 instances programmatically using Paramiko and then run commands. Boto3 does not include this functionality. You could probably modify the boto code to work with boto3 without a huge amount of effort. Or you could look into using something like fabric or ansible which provide a much more powerful way to remotely execute commands on EC2 instances.
Use boto3 to discover instances and Fabric to run commands on them.
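As a rough sketch of that combination (assuming Fabric 2.x, an Ubuntu AMI, and a key path of your own; the user name and key file here are placeholders):

import boto3
from fabric import Connection  # Fabric 2.x

ec2 = boto3.resource('ec2')
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

for instance in running:
    # 'ubuntu' and the key path are assumptions; adjust for your AMI and key
    conn = Connection(
        host=instance.public_dns_name,
        user='ubuntu',
        connect_kwargs={'key_filename': '/path/to/mykey.pem'},
    )
    result = conn.run('uptime', hide=True)
    print(instance.id, result.stdout.strip())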
Related
First off, I'm pretty new to AWS, and it took me a lot of trial and error to get my Lambda function to execute my Python script, which sits on an EC2 instance.
If I run my code manually through the command line on my EC2 instance, the code works perfectly: it calls the requested API and saves the data.
If I call my script through a Lambda function using SSH, it stops executing at the API call. The Lambda returns that everything ran, but it didn't; I get no output messages to say there was an exception, and nothing in the CloudWatch log either. I know it starts to execute my code, because if I put print statements before the API calls, I see them returned in the CloudWatch log.
Any ideas to help out a noob?
Here is my lambda code:
import time
import boto3
import json
import paramiko

def lambda_handler(event, context):
    ec2 = boto3.resource('ec2', region_name='eu-west-2')
    instance_id = 'removed_id'
    instance = ec2.Instance(instance_id)

    # Start the instance
    instance.start()

    s3_client = boto3.client('s3')
    # Download private key file from secure S3 bucket
    # and save it inside /tmp/ folder of lambda event
    s3_client.download_file('removed_bucket', 'SSEC2.pem',
                            '/tmp/SSEC2.pem')

    # Allowing few seconds for the download to complete
    time.sleep(2)

    # Giving some time to start the instance completely
    time.sleep(60)

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('/tmp/SSEC2.pem')

    # username is most likely 'ec2-user' or 'root' or 'ubuntu'
    # depending upon your ec2 AMI
    ssh.connect(
        instance.public_dns_name, username='ec2-user', pkey=privkey
    )

    print('Executing')
    stdin, stdout, stderr = ssh.exec_command(
        '/home/ec2-user/miniconda3/bin/python /home/ec2-user/api-calls/main.py')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        print(line)
    ssh.close()

    # Stop the instance
    # instance.stop()

    return {
        'statusCode': 200,
        'body': json.dumps('Execution successful ')
    }
Edit:
Okay, slight update: it's not falling over on the API call, it's actually stopping when it tries to open a config file to write to, which is stored in "config/config.json". Now obviously this works in the EC2 environment when I'm executing manually, so this must have something to do with environment variables in EC2 not being the same when the job is triggered from elsewhere? Here is the exact code:
@staticmethod
def get_config():
    with open("config/config.json", "r") as read_file:
        data = json.load(read_file)
    return data
Problem solved. I need to use the full path names when executing the code remotely:
with open("/home/ec2-user/api-calls/config/config.json", "r") as read_file:
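If you'd rather not hard-code the absolute path, a sketch of an alternative (assuming the config directory sits next to the script, and written here as a standalone function rather than the original static method) is to build the path from the script's own location, which resolves the same way whether the script is run locally or over SSH:

import json
import os

# Directory containing this script, regardless of the caller's working directory
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def get_config():
    config_path = os.path.join(BASE_DIR, "config", "config.json")
    with open(config_path, "r") as read_file:
        return json.load(read_file)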
I am using Python to send emails via AWS Simple Email Service (SES).
In an attempt to have the best security possible, I would like to make a boto SES connection without exposing my access keys inside the code.
Right now I am establishing a connection like this:
ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id='<ACCESS_KEY>',
    aws_secret_access_key='<SECRET_ACCESS_KEY>'
)
Is there a way to do this without exposing my access keys inside the script?
The simplest solution is to use environment variables, which you can retrieve in your Python code with os.environ.
export AWS_ACCESS_KEY_ID=<YOUR REAL ACCESS KEY>
export AWS_SECRET_ACCESS_KEY=<YOUR REAL SECRET KEY>
And in the Python code:
from os import environ as os_env

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=os_env['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os_env['AWS_SECRET_ACCESS_KEY']
)
Attach an IAM role with SES privileges to your EC2 instance; then you do not have to pass the credentials explicitly. Your script will get the credentials automatically from the metadata server.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console. Then your code will be like:
ses = boto.ses.connect_to_region('us-west-2')
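If you are on boto3 rather than boto, the same idea applies: with the role attached, boto3 resolves the credentials from the instance metadata on its own. A minimal sketch:

import boto3

# No keys in the code: boto3 falls back to the instance profile credentials
ses = boto3.client('ses', region_name='us-west-2')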
The preferred method of authentication is to use boto3's ability to read your AWS credentials file.
Configure your AWS CLI using the aws configure command.
Then, in your script, you can use the Session call to get the credentials:
session = boto3.Session(profile_name='default')
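From that session you can then create the client without putting keys in the code; a minimal sketch (note this uses the boto3 SES client rather than the boto connect_to_region call from the question):

import boto3

session = boto3.Session(profile_name='default')

# The session reads the keys from ~/.aws/credentials for the chosen profile
ses = session.client('ses', region_name='us-west-2')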
There are two options. The first is to set an environment variable named ACCESS_KEY and another named SECRET_ACCESS_KEY; then in your code you would have:
import os

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=os.environ['ACCESS_KEY'],
    aws_secret_access_key=os.environ['SECRET_ACCESS_KEY']
)
The second is to use a JSON file:
import json

path_to_json = 'your/path/here.json'
with open(path_to_json, 'r') as f:
    keys = json.load(f)

ses = boto.ses.connect_to_region(
    'us-west-2',
    aws_access_key_id=keys['ACCESS_KEY'],
    aws_secret_access_key=keys['SECRET_ACCESS_KEY']
)
The JSON file would contain:
{"ACCESS_KEY": "<ACCESS_KEY>", "SECRET_ACCESS_KEY": "<SECRET_ACCESS_KEY>"}
I am trying to establish an SSH connection between a Windows PC and a Linux server (Amazon EC2).
I decided to use the Fabric API, implemented in Python.
I have PuTTY installed on the Windows PC.
My fabfile script looks like this:
import sys
from fabric.api import *

def testlive():
    print 'Test live ...'
    run("uptime")

env.use_ssh_config = False
env.host_string = "host.something.com"
env.user = "myuser"
env.keys_filename = "./private_openssh.key"
env.port = 22
env.gateway = "proxyhost:port"

testlive()
I am running Fabric in the same directory with the private key.
I am able to login on this machine using Putty.
The problem: I am constantly asked for a login password for the specified user.
Based on other posts (here and here) I already tried:
passing the key file as a list to env.keys_filename
using username@host_string
using env.host instead of env.host_string
How do I properly configure Fabric to deal with the proxy server and the SSH private key file?
The following should work.
env.key_filename = "./private_openssh.key"
(notice the typo in your attempt)
Fabric's API is best avoided, really; there are way too many bugs and issues (see the issue tracker).
You can do what you want in Python with the following:
from __future__ import print_function
from pssh import ParallelSSHClient
from pssh.utils import load_private_key

client = ParallelSSHClient(['host.something.com'],
                           pkey=load_private_key('private_openssh.key'),
                           proxy_host='proxyhost',
                           proxy_port=<proxy port number>,
                           user='myuser',
                           proxy_user='myuser')

output = client.run_command('uname')
for line in output['host.something.com'].stdout:
    print(line)
ParallelSSH is available from pip as parallel-ssh.
PuTTYgen is what you will use to generate your SSH key; then upload the copied SSH key to your Cloud Management portal - see Joyent.
You will have to generate and authenticate a private key. To do so, use PuTTYgen to generate SSH access using an RSA key with a password, a key comment, and a confirmed key passphrase; here is a step-by-step guide: SSH Access using RSA Key Authentication.
My program is running inside a VMware virtual machine, and my purpose is to get some information about the machine this VM is hosted on.
I've already done some googling and found a library called pyVmomi.
But I still can't figure out how to get the information I want.
The samples are almost all about getting all VMs or all hosts, and there is no obvious way I can adapt them for getting information about the current machine.
Assuming your VM (the one running this pyVmomi script) is running some version of Linux, you could use something like dmidecode to find the UUID.
import subprocess
from pyVim import connect

# Ask the guest OS for its BIOS UUID (VMware uses this as the VM's UUID by default)
proc = subprocess.Popen(["sudo dmidecode|grep UUID|awk '{print $2}'"],
                        stdout=subprocess.PIPE, shell=True)
(out, err) = proc.communicate()
uuid = out.decode().strip()

# ARGS holds the vCenter/ESXi connection details (e.g. parsed from argparse)
SI = connect.SmartConnect(host=ARGS.host,
                          user=ARGS.user,
                          pwd=ARGS.password,
                          port=ARGS.port)

# Find the VM with this UUID in the inventory, then read its current host
VM = SI.content.searchIndex.FindByUuid(None, uuid, True, False)
HOST = VM.runtime.host
print("Host name: {}".format(HOST.name))
What this will do is execute a system command on the Linux box to find the UUID. VMware uses the BIOS UUID as the default UUID, so dmidecode should work here. Next it will connect to a given vSphere host (in this example I assume a vCenter, but an ESXi host should provide the same results here). Then it will search the inventory looking for a VM with a matching UUID. From there it reads runtime.host, which returns the HostSystem for the VM. Please note that, due to clustering, that host could change.
This should help: install pynetinfo and pass the device to the function.
#!/usr/bin/python
import netinfo

def get_route(interface):
    r = []
    for routes in netinfo.get_routes():
        if routes['dev'] == interface:
            r.append(routes['dest'])
    return r

print(get_route('wlan0'))
I am trying to create a fabfile.py so that I can deploy on EC2. I have the following in my fabfile.py:
from __future__ import with_statement
from fabric.api import *

def ec2():
    env.hosts = ['111.111.111.111']
    env.user = 'ubuntu'
    env.key_filename = '/path/to/my/pem/key.pem'

def run_ls():
    run('ls -alt')
'111.111.111.111' is the elastic IP of my instance, and I always log in with ubuntu, not root.
When I run the following command
fab ec2 run_ls
i see the following output:
[111.111.111.111] Executing task 'run_ls'
[111.111.111.111] run: ls -alt
Fatal error: Host key for 111.111.111.111 did not match pre-existing key! Server's key was changed recently, or possible man-in-the-middle attack.
Aborting.
Not sure what is going on, but I can't seem to find any great tutorials on using Fabric on EC2, so I don't know how this is supposed to work.
Thanks
Update:
Looks like
env.hosts = ['111.111.111.111']
is not valid; you need to use the actual hostname
env.hosts = ['mywebsite.com']
which fixed my issue
You can also use the '--disable-known-hosts' switch to ignore this error.
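For example, with the tasks from the question (assuming Fabric 1.x, where this flag lives):
fab --disable-known-hosts ec2 run_ls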
Make sure your elastic IP is attached to the instance. I think key_filename takes a single argument, but mine works when you pass in a list instead:
env.user = "ubuntu"
env.key_filename = ["my_key.pem",]
Maybe you should try using the public hostname of your instance like:
env.roledefs.update({
    'prod': ['ec2-52-14-72-225.us-west-1.compute.amazonaws.com'],
})
From a Vagrant issue on GitHub, you may need to remove the host from the known_hosts file using a command like this:
ssh-keygen -R 111.111.111.111