I am using an Ubuntu 14.04 server instance on AWS, and I am having trouble moving a Python script that works in the us-east-1 region to the Frankfurt (eu-central-1) region. The script gets the IP addresses of all of the instances in a particular Auto Scaling group:
import boto
import boto.ec2

def get_autoscale_instance_ips(region, group_name):
    conn = boto.ec2.connect_to_region(region)
    as_conn = boto.connect_autoscale()
    try:
        group = as_conn.get_all_groups(names=[group_name])[0]
        instance_ids = [i.instance_id for i in group.instances]
        reservations = conn.get_all_reservations(instance_ids)
        instances = [i for r in reservations for i in r.instances]
        ip_addresses = [i.private_ip_address for i in instances]
        return ip_addresses
    except IndexError:
        return []
According to the Boto documentation, I need to change the autoscale endpoint from the default US region by putting the setting in a boto config file, either at /etc/boto.cfg for site-wide settings or at ~/.boto for user-specific settings.
My config file just contains the following:
[Boto]
autoscale_endpoint = autoscaling.eu-central-1.amazonaws.com
autoscale_region_name = eu-central-1
I have tried both of these options, and in both cases the boto config file seems to be ignored.
What am I doing wrong? I'm sure I've missed something obvious.
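For reference, boto 2 also lets you target a region directly in code, which sidesteps the config file entirely. A minimal sketch, assuming a boto version recent enough to know about eu-central-1 (connect_to_region returns None for unknown regions) and a hypothetical group name:

import boto.ec2
import boto.ec2.autoscale

# Connect to an explicit region instead of relying on the config file
ec2_conn = boto.ec2.connect_to_region('eu-central-1')
as_conn = boto.ec2.autoscale.connect_to_region('eu-central-1')

group = as_conn.get_all_groups(names=['my-asg'])[0]
print([i.instance_id for i in group.instances])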
I have seen examples for checking whether an S3 bucket exists and have implemented one below. My bucket is located in the us-east-1 region, but the following code doesn't throw an exception even though the session is configured for ap-south-1. Is there a way to make the check region-specific, depending on my session?
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(
    profile_name='TEST',
    region_name='ap-south-1'
)
s3 = session.resource('s3')
bucket_name = 'TEST_BUCKET'
try:
    s3.meta.client.head_bucket(Bucket=bucket_name)
except ClientError as c:
    print(c)
It does not matter which S3 regional endpoint you send the request to. The underlying SDK (boto3) will redirect as needed. It's preferable, however, to target the correct region if you know it in advance, to save on redirects.
You can see this in detail if you use the awscli in debug mode:
aws s3api head-bucket --bucket mybucket --region ap-south-1 --debug
You will see debug output similar to this:
DEBUG - S3 client configured for region ap-south-1 but the bucket mybucket is in region us-east-1; Please configure the proper region to avoid multiple unnecessary redirects and signing attempts.
DEBUG - Switching signature version for service s3 to version s3v4 based on config file override.
DEBUG - Updating URI from https://s3.ap-south-1.amazonaws.com/mybucket to https://s3.us-east-1.amazonaws.com/mybucket
Note that the awscli uses the boto3 SDK, as does your Python script.
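If you want the existence check to be region-aware, one option (a sketch, not part of the original answer) is to ask S3 where the bucket actually lives and compare that with your session's region; get_bucket_location returns None as the LocationConstraint for buckets in us-east-1:

import boto3
from botocore.exceptions import ClientError

session = boto3.Session(profile_name='TEST', region_name='ap-south-1')
client = session.client('s3')

def bucket_exists_in_session_region(bucket_name):
    try:
        # LocationConstraint is None for buckets in us-east-1
        location = client.get_bucket_location(Bucket=bucket_name)['LocationConstraint'] or 'us-east-1'
    except ClientError:
        return False  # bucket does not exist, or we cannot access it
    return location == session.region_name

print(bucket_exists_in_session_region('TEST_BUCKET'))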
I am writing a Python program using boto3 that grabs all of the queries made by a master account and pushes them out to all of the master account's sub-accounts.
Grabbing the query IDs from the master account is done, but I'm having trouble pushing them out to the sub-accounts. With my authentication information, AWS connects to the master account by default, and I can't figure out how to get it to connect to a sub-account. Generally, AWS services handle this by switching roles, but Athena doesn't have a built-in method for it. I could manually create different profiles, but I'm not sure how to switch between them in the middle of code execution.
Here's Amazon's code example for switching roles via STS, which does support assuming different roles: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html
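For reference, a minimal sketch of that STS approach in boto3 (the role ARN and session name are placeholders, and it assumes the master account's credentials are allowed to assume the role in the sub-account):

import boto3

sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::111111111111:role/SubAccountRole',  # placeholder ARN
    RoleSessionName='athena-query-sync'
)
creds = assumed['Credentials']

# Build a session that acts as the sub-account role
sub_session = boto3.session.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)
sub_athena = sub_session.client('athena')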
Here's what my program looks like so far:
#!/usr/bin/env python3
import boto3

dev = boto3.session.Session(profile_name='dev')

# Client for executing Athena queries
client = dev.client('athena')
s3_input = 's3://dev/test/'
s3_output = 's3://dev/testOutput'
database = 'ex_athena_db'
table = 'test_data'

response = client.list_named_queries(
    MaxResults=50,
    WorkGroup='primary'
)
print(response)
So I have the "dev" profile, but I'm not sure how to differentiate this profile to indicate to AWS that I'd like to access one of the child accounts. Is it just the name, or do I need some other parameter? I don't think I can (or need to) generate a separate authentication token for this.
I solved this by creating a new named profile for the sub-account that assumes a role in that account via its ARN; boto3 performs the STS call automatically when the profile is used.
Sample config:
[default]
region = us-east-1
[profile ecr-dev]
role_arn = arn:aws:iam::76532435:role/AccountRole
source_profile = default
Sample code:
#!/usr/bin/env python3
import boto3

# Use the profile that assumes the sub-account role (see the config above)
dev = boto3.session.Session(profile_name='ecr-dev', region_name='us-east-1')

# Client for executing Athena queries
client = dev.client('athena')
s3_input = 's3://test/'
s3_output = 's3://test'
database = 'ex_athena_db'

response = client.list_named_queries(
    MaxResults=50,
    WorkGroup='primary'
)
print(response)
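To actually push the queries out, something along these lines might follow; this is a sketch only, where the sub-account profile names are placeholders and each profile assumes a role as in the sample config above:

#!/usr/bin/env python3
import boto3

# Read the named queries from the master account...
master = boto3.session.Session(profile_name='dev', region_name='us-east-1')
master_athena = master.client('athena')

# ...and re-create them in each sub-account (profile names are placeholders)
sub_profiles = ['sub-account-1', 'sub-account-2']

for query_id in master_athena.list_named_queries(MaxResults=50, WorkGroup='primary')['NamedQueryIds']:
    detail = master_athena.get_named_query(NamedQueryId=query_id)['NamedQuery']
    for profile in sub_profiles:
        sub = boto3.session.Session(profile_name=profile, region_name='us-east-1')
        sub.client('athena').create_named_query(
            Name=detail['Name'],
            Database=detail['Database'],
            QueryString=detail['QueryString']
        )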
I'm new to Python and Boto, but I've managed to sort out file uploads from my server to S3.
But once I've uploaded a new file I want to do an invalidation request.
I've got the code to do that:
import boto
print 'Connecting to CloudFront'
cf = boto.connect_cloudfront()
cf.create_invalidation_request(aws_distribution_id, ['/testkey'])
But I'm getting an error: NameError: name 'aws_distribution_id' is not defined
I guessed that I could add the distribution id to the ~/.boto config, like the aws_secret_access_key etc:
$ cat ~/.boto
[Credentials]
aws_access_key_id = ACCESS-KEY-ID-GOES-HERE
aws_secret_access_key = ACCESS-KEY-SECRET-GOES-HERE
aws_distribution_id = DISTRIBUTION-ID-GOES-HERE
But that's not actually listed in the docs, so I'm not too surprised it failed:
http://docs.pythonboto.org/en/latest/boto_config_tut.html
My problem is I don't want to add the distribution_id to the script as I run it on both my live and staging servers, and I have different S3 and CloudFront set ups for both.
So I need the distribution_id to change per server, which is how I've got the AWS access keys set up.
Can I add something else to the boto config, or is there some kind of Python user defaults file I could add it to?
Since you can have multiple CloudFront distributions per account, it wouldn't make sense to configure a single distribution ID in .boto.
You could have another config file specific to your own environment and run your invalidation script with that config file as an argument (or use the same file name on each server, with different data depending on the environment).
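A minimal sketch of that suggestion, assuming a hypothetical invalidate.py (Python 2 / boto 2, matching the snippet above) that takes the environment-specific config path on the command line and a config section named aws_cloudfront:

#!/usr/bin/env python
import sys
import ConfigParser
import boto

# Usage: python invalidate.py /path/to/live.cnf (or staging.cnf)
config = ConfigParser.ConfigParser()
config.read(sys.argv[1])
distribution_id = config.get('aws_cloudfront', 'distribution_id')

cf = boto.connect_cloudfront()
cf.create_invalidation_request(distribution_id, ['/testkey'])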
I solved this by using the ConfigParser. I added the following to the top of my script:
import os
import ConfigParser

# read conf
config = ConfigParser.ConfigParser()
config.read(os.path.expanduser('~/.my-app.cnf'))
distribution_id = config.get('aws_cloudfront', 'distribution_id')
And inside the conf file at ~/.my-app.cnf
[aws_cloudfront]
distribution_id = DISTRIBUTION_ID
So on my live server, I just need to drop the cnf file into the user's home directory and change the distribution_id.
I'm trying to programmatically spin up an Azure VM using the Python REST API wrapper. All I want is a simple VM, not part of a deployment or anything like that. I've followed the example here: http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/#CreateVM
I've gotten the code to run, but I am not seeing any new VM in the portal; all it does is create a new cloud service that says "You have nothing deployed to the production environment." What am I doing wrong?
You've created a hosted service (cloud service) but haven't deployed anything into that service. You need to do a few more things, so I'll continue from where you left off, where name is the name of the VM:
# Where should the OS VHD be created:
media_link = 'http://portalvhdsexample.blob.core.windows.net/vhds/%s.vhd' % name

# Linux username/password details:
linux_config = azure.servicemanagement.LinuxConfigurationSet(name, 'username', 'password', True)

# Endpoint (port) configuration example, since documentation on this is lacking:
endpoint_config = azure.servicemanagement.ConfigurationSet()
endpoint_config.configuration_set_type = 'NetworkConfiguration'
endpoint1 = azure.servicemanagement.ConfigurationSetInputEndpoint(
    name='HTTP', protocol='tcp', port='80', local_port='80',
    load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint2 = azure.servicemanagement.ConfigurationSetInputEndpoint(
    name='SSH', protocol='tcp', port='22', local_port='22',
    load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint_config.input_endpoints.input_endpoints.append(endpoint1)
endpoint_config.input_endpoints.input_endpoints.append(endpoint2)

# Set the OS HD up for the API:
os_hd = azure.servicemanagement.OSVirtualHardDisk(image_name, media_link)

# Actually create the machine and start it up:
try:
    sms.create_virtual_machine_deployment(service_name=name, deployment_name=name,
                                          deployment_slot='production', label=name, role_name=name,
                                          system_config=linux_config, network_config=endpoint_config,
                                          os_virtual_hard_disk=os_hd, role_size='Small')
except Exception as e:
    logging.error('AZURE ERROR: %s' % str(e))
    return False
return True
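For context, here is a rough sketch (not part of the original answer) of the setup the snippet above assumes, following the legacy azure.servicemanagement SDK from the linked guide; the subscription ID, certificate path, service name, and image name are all placeholders:

import azure.servicemanagement

subscription_id = 'your-subscription-id'     # placeholder
certificate_path = '/path/to/mycert.pem'     # placeholder management certificate
name = 'myvmservice'                         # reused for the cloud service, deployment and role
image_name = 'placeholder-linux-image-name'  # pick a real OS image name from list_os_images()

sms = azure.servicemanagement.ServiceManagementService(subscription_id, certificate_path)

# The cloud service must exist before a VM deployment can be created in it
sms.create_hosted_service(service_name=name, label=name, location='West US')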
Maybe I'm not understanding your problem, but a VM is essentially a deployment within a cloud service (think of it like a logical container for machines).
I have a flask app hosted on Heroku that needs to run commands on an AWS EC2 instance (Amazon Linux AMI) using boto.cmdshell. A couple of questions:
Is using a key pair to access the EC2 instance the best practice? Or is using username/password better?
If using a key pair is the preferred method, what's the best practice on managing/storing private keys on Heroku? Obviously putting the private key in git is not an option.
Thanks.
Heroku lets you take advantage of config variables to manage your application. Here is an example of my config.py file that lives inside my Flask application:
import os
# flask
PORT = int(os.getenv("PORT", 5000))
basedir = str(os.path.abspath(os.path.dirname(__file__)))
SECRET_KEY = str(os.getenv("APP_SECRET_KEY"))
DEBUG = str(os.getenv("DEBUG"))
ALLOWED_EXTENSIONS = str(os.getenv("ALLOWED_EXTENSIONS"))
TESTING = os.getenv("TESTING", False)
# s3
AWS_ACCESS_KEY_ID = str(os.getenv("AWS_ACCESS_KEY_ID"))
AWS_SECRET_ACCESS_KEY = str(os.getenv("AWS_SECRET_ACCESS_KEY"))
S3_BUCKET = str(os.getenv("S3_BUCKET"))
S3_UPLOAD_DIRECTORY = str(os.getenv("S3_UPLOAD_DIRECTORY"))
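For completeness, here's a rough sketch (not from the original answer) of wiring that config module into the Flask app, assuming the file above is saved as config.py next to the app:

from flask import Flask
import config

app = Flask(__name__)
app.config.from_object(config)  # picks up the upper-case names defined in config.py

@app.route('/')
def index():
    return 'Uploads go to S3 bucket: %s' % app.config['S3_BUCKET']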
Now I can have two different sets of values: the config pulls from my local environment variables when the application runs on my computer, and from Heroku's config vars when it is in production. For example,
DEBUG = str(os.getenv("DEBUG"))
is "TRUE" on my local computer but "False" on Heroku. To check your Heroku config vars, run heroku config (and use heroku config:set to change them).
Also keep in mind that if you ever want to keep some files as part of your project locally but not on Heroku or GitHub, you can use .gitignore. Of course, those files won't exist in your production application then.
What I was looking for was guidance on how to deal with private keys. Both #DrewV and #yfeldblum pointed me in the right direction. I ended up turning my private key into a string and storing it in a Heroku config variable.
If anyone is looking to do something similar, here's a sample code snippet using paramiko:
import paramiko
import StringIO
import os

# Load the private key string from a Heroku config variable
key = paramiko.RSAKey.from_private_key(StringIO.StringIO(str(os.environ.get("AWS_PRIVATE_KEY"))))

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(str(os.environ.get("EC2_PUBLIC_DNS")), username='ec2-user', pkey=key)

stdin, stdout, stderr = ssh.exec_command('ps')
for line in stdout:
    print '... ' + line.strip('\n')

ssh.close()
Thanks to #DrewV and #yfeldblum for helping (upvote for both).
You can use config vars to store config items in an application running on Heroku.
You can use a username/password combination. You may make the username something easy; but be sure to generate a strong password, e.g., with something like openssl rand -base64 32.