I have a Flask app hosted on Heroku that needs to run commands on an AWS EC2 instance (Amazon Linux AMI) using boto.cmdshell. A couple of questions:
Is using a key pair to access the EC2 instance the best practice? Or is using username/password better?
If using a key pair is the preferred method, what's the best practice on managing/storing private keys on Heroku? Obviously putting the private key in git is not an option.
Thanks.
Heroku lets you take advantage of config variables to manage your application. Here is an example of my config.py file that lives inside my Flask application:
import os
# flask
PORT = int(os.getenv("PORT", 5000))
basedir = str(os.path.abspath(os.path.dirname(__file__)))
SECRET_KEY = str(os.getenv("APP_SECRET_KEY"))
DEBUG = str(os.getenv("DEBUG"))
ALLOWED_EXTENSIONS = str(os.getenv("ALLOWED_EXTENSIONS"))
TESTING = os.getenv("TESTING", False)
# s3
AWS_ACCESS_KEY_ID = str(os.getenv("AWS_ACCESS_KEY_ID"))
AWS_SECRET_ACCESS_KEY = str(os.getenv("AWS_SECRET_ACCESS_KEY"))
S3_BUCKET = str(os.getenv("S3_BUCKET"))
S3_UPLOAD_DIRECTORY = str(os.getenv("S3_UPLOAD_DIRECTORY"))
Now I can have two different sets of values: the config pulls from my local environment variables when the application runs on my computer, and from the Heroku config variables when in production. For example,
DEBUG = str(os.getenv("DEBUG"))
is "TRUE" on my local computer. But False on Heroku. In order to check your Heroku config run.
heroku config
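To set or change one of these values (the DEBUG flag above, for example), the companion command is heroku config:set:
heroku config:set DEBUG=FALSE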
Also keep in mind that if you ever want to keep some files as part of your project locally but not on Heroku or GitHub, you can use .gitignore. Of course, those files won't exist in your production application then.
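For completeness, here is a minimal sketch of how a Flask app might load this config module (assuming config.py sits at the project root and is importable as config):
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')   # picks up the uppercase names defined in config.py

# the values now come from local env vars or Heroku config vars
s3_bucket = app.config['S3_BUCKET']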
What I was looking for was guidance on how to deal with private keys. Both @DrewV and @yfeldblum pointed me in the right direction. I ended up turning my private key into a string and storing it in a Heroku config variable.
If anyone is looking to do something similar, here's a sample code snippet using paramiko:
import os
import StringIO

import paramiko

# Rebuild the RSA key object from the PEM string stored in the Heroku config var
key = paramiko.RSAKey.from_private_key(StringIO.StringIO(str(os.environ.get("AWS_PRIVATE_KEY"))))

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(str(os.environ.get("EC2_PUBLIC_DNS")), username='ec2-user', pkey=key)

stdin, stdout, stderr = ssh.exec_command('ps')
for line in stdout:
    print '... ' + line.strip('\n')
ssh.close()
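For reference, one way to get the multi-line private key into the Heroku config variable used above is to pass the file contents to heroku config:set (the key path here is just a placeholder):
heroku config:set AWS_PRIVATE_KEY="$(cat path/to/private_key.pem)"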
Thanks to @DrewV and @yfeldblum for helping (upvote for both).
You can use config vars to store config items in an application running on Heroku.
You can use a username/password combination. You may make the username something easy, but be sure to generate a strong password, e.g. with something like openssl rand -base64 32.
Related
I have a flask application and I use a config file with some sensitive information. I was wondering how to deploy my application with the config file without releasing the sensitive information it holds.
TL;DR: Create a class to hold your config secrets, store the actual secrets in environment variables on your host machine, and read the environment variables in your app.
Detailed implementation below.
This is my folder structure:
api
|_config
  |_config.py
|_app.py
Then inside my app.py, which actually starts my Flask application, it looks roughly like this (I've excluded everything that doesn't matter):
import os

from flask import Flask

from config.config import config

def create_app(app_environment=None):
    # Fall back to the FLASK_ENV environment variable (default 'dev') when no environment is given
    if app_environment is None:
        app = Flask(__name__)
        app.config.from_object(config[os.getenv('FLASK_ENV', 'dev')])
    else:
        app = Flask(__name__)
        app.config.from_object(config[app_environment])
    return app

if __name__ == "__main__":
    app = create_app(os.getenv('FLASK_ENV', 'dev'))
    app.run()
This allows you to dynamically specify an app environment. For example, you can pass the app environment by setting an environment variable and reading it in before you call create_app(). This is extremely useful if you containerize your Flask app using Docker or some other virtualization tool.
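As a usage sketch (assuming app.py exposes the create_app() factory shown above, and using the 'test' key from the config dictionary shown below), a test module could build the app explicitly:
import os

from app import create_app  # assumes the factory above lives in app.py

# TestConfig below reads this variable; the dummy value is for illustration only
os.environ.setdefault('MARKET_STACK_API_KEY', 'dummy-test-token')

app = create_app('test')    # picks TestConfig from the config dict
client = app.test_client()  # Flask's built-in test client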
Lastly, my config.py file looks like this. You would change the attributes in each of my environment configs to your secrets.
import os

class ProdConfig:
    # Production configuration
    API_TOKEN = os.environ.get('PROD_MARKET_STACK_API_KEY_SECRET')

class DevConfig:
    # Development configuration
    API_TOKEN = os.environ.get('API_KEY_SECRET')

class TestConfig:
    # Test configuration
    API_TOKEN = os.environ.get('MARKET_STACK_API_KEY')

config = {
    'dev': DevConfig,
    'test': TestConfig,
    'prod': ProdConfig
}
Further, you would access your config secrets throughout any modules in your Flask application via...
from flask import current_app
current_app.config['API_TOKEN']
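For example, inside a request handler (the blueprint and route names here are hypothetical, for illustration only):
from flask import Blueprint, current_app, jsonify

bp = Blueprint('market', __name__)

@bp.route('/market-data')
def market_data():
    # the secret is resolved from whichever config class the app was created with
    token = current_app.config['API_TOKEN']
    return jsonify({'token_configured': bool(token)})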
I believe the answer to your question may be more related to where your application is being deployed, rather than which web framework you are using.
As far as I understand, it's bad practice to store/track sensitive information (passwords and API keys, for example) in your source files, and you should probably avoid that.
If you have already committed that sensitive data and you want to remove it completely from your git history, I recommend checking this GitHub page.
A couple of high-level solutions could be:
Have your config file access environment variables instead of hard-coded values.
If you are using a cloud service such as Google Cloud Platform or AWS, you could use a secret manager to store your data and fetch it safely from your app (a short sketch follows this list).
Another approach could be storing the information encrypted (maybe with something like KMS) and decrypting it when needed (my least favorite).
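As a minimal sketch of the secret-manager approach on AWS (assuming boto3 is installed with working credentials, and using a placeholder secret name):
import json

import boto3

# fetch a JSON secret from AWS Secrets Manager at startup
# 'my-app/secrets' is a placeholder name used only for illustration
client = boto3.client('secretsmanager')
response = client.get_secret_value(SecretId='my-app/secrets')
secrets = json.loads(response['SecretString'])

API_TOKEN = secrets.get('API_KEY_SECRET')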
I have deployed my Flask web app API on Azure. I have a lot of config files, so I created a separate directory where I keep all of them. This is how my project directory looks:
configs
-> app_config.json
-> client_config.json
logs
-> app_debug.log
-> app_error.log
data
-> some other data related files
app.py
app.py is my main Python file, from which I import all the config files, and below is how I use it:
import json
import os

config_file = os.path.join(os.path.dirname(__file__), 'configs', 'app_config.json')

# Get the config data from the config json file
with open(config_file) as json_data:
    config_data = json.load(json_data)
After this I can easily use config_data anywhere in the code:
mongo_db = connect_mongodb(username=config_data['MongoUsername'], password=config_data['MongoPassword'], url=config_data['MongoDBURL'], port=config_data['Port'], authdbname=config_data['AuthDBName'])
I simply need an efficient way to debug a GAE application, and to do so I need to connect to the production GAE infrastructure from localhost when running dev_appserver.py.
The following code works well if I run it as a separate script:
import argparse

try:
    import dev_appserver
    dev_appserver.fix_sys_path()
except ImportError:
    print('Please make sure the App Engine SDK is in your PYTHONPATH.')
    raise

from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub

def main(project_id):
    server_name = '{}.appspot.com'.format(project_id)
    remote_api_stub.ConfigureRemoteApiForOAuth(
        app_id='s~' + project_id,
        path='/_ah/remote_api',
        servername=server_name)

    # List the first 10 keys in the datastore.
    keys = ndb.Query().fetch(10, keys_only=True)
    for key in keys:
        print(key)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('project_id', help='Your Project ID.')
    args = parser.parse_args()
    main(args.project_id)
With this script, I was able to get data from the remote Datastore. But where do I need to put the same code in my application (which is obviously not a single script) to make it work?
I've tried to put the remote_api_stub.ConfigureRemoteApiForOAuth() code in appengine_config.py, but I got a recursion error.
I'm running the app like this:
dev_appserver.py app.yaml --admin_port=8001 --enable_console --support_datastore_emulator=no --log_level=info
The application uses NDB to access Google Datastore.
The application contains many modules and files, and I simply don't know where to put the remote_api_stub auth code.
I hope somebody from the Google team sees this topic, because I've searched all over the internet without any results. It's unbelievable how many people are developing apps for the GAE platform, yet it looks like nobody is developing/debugging apps locally.
I am trying to establish an SSH connection between a Windows PC and a Linux server (Amazon EC2).
I decided to use the Fabric API, which is implemented in Python.
I have Putty installed on the Windows PC.
My fabfile script looks like this:
import sys

from fabric.api import *

def testlive():
    print 'Test live ...'
    run("uptime")

env.use_ssh_config = False
env.host_string = "host.something.com"
env.user = "myuser"
env.keys_filename = "./private_openssh.key"
env.port = 22
env.gateway = "proxyhost:port"

testlive()
I am running Fabric in the same directory as the private key.
I am able to login on this machine using Putty.
The problem: I am constantly asked for a login password for the specified user.
Based on other posts (here and here) I already tried:
passing the key file as a list to env.keys_filename
using username@host_string
using env.host instead of env.host_string
How do I properly configure Fabric to deal with a proxy server and an SSH private key file?
The following should work.
env.key_filename = "./private_openssh.key"
(notice the typo in your attempt)
Fabric's API is best avoided, really; it has way too many bugs and issues (see its issue tracker).
You can do what you want in Python with the following:
from __future__ import print_function

from pssh import ParallelSSHClient
from pssh.utils import load_private_key

client = ParallelSSHClient(['host.something.com'],
                           pkey=load_private_key('private_openssh.key'),
                           proxy_host='proxyhost',
                           proxy_port=<proxy port number>,
                           user='myuser',
                           proxy_user='myuser')

output = client.run_command('uname')
for line in output['host.something.com'].stdout:
    print(line)
ParallelSSH is available from pip as parallel-ssh.
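For reference, the install is the usual pip one-liner:
pip install parallel-ssh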
PuTTYgen is what you will use to generate your SSH key, then upload the copied SSH key to your cloud management portal - see Joyent.
You will have to generate and authenticate a private key. To do so, you use PuTTYgen to generate SSH access with an RSA key, setting a password and a key comment, and confirming the key passphrase. Here is a step-by-step guide: SSH Access using RSA Key Authentication.
I'm new to Python and boto. I've managed to sort out file uploads from my server to S3.
But once I've uploaded a new file I want to do an invalidation request.
I've got the code to do that:
import boto
print 'Connecting to CloudFront'
cf = boto.connect_cloudfront()
cf.create_invalidation_request(aws_distribution_id, ['/testkey'])
But I'm getting an error: NameError: name 'aws_distribution_id' is not defined
I guessed that I could add the distribution id to the ~/.boto config, like the aws_secret_access_key etc:
$ cat ~/.boto
[Credentials]
aws_access_key_id = ACCESS-KEY-ID-GOES-HERE
aws_secret_access_key = ACCESS-KEY-SECRET-GOES-HERE
aws_distribution_id = DISTRIBUTION-ID-GOES-HERE
But that's not actually listed in the docs, so I'm not too surprised it failed:
http://docs.pythonboto.org/en/latest/boto_config_tut.html
My problem is I don't want to add the distribution_id to the script as I run it on both my live and staging servers, and I have different S3 and CloudFront set ups for both.
So I need the distribution_id to change per server, which is how I've got the AWS access keys set.
Can I add something else to the boto config or is there a python user defaults I could add it to?
Since you can have multiple CloudFront distributions per account, it wouldn't make sense to configure it in .boto.
You could have another config file specific to your own environment and run your invalidation script using that config file as an argument (or have the same file, but with different data depending on your environment).
I solved this by using ConfigParser. I added the following to the top of my script:
import os
import ConfigParser

# read conf (expand ~ so the path resolves to the user's home directory)
config = ConfigParser.ConfigParser()
config.read(os.path.expanduser('~/my-app.cnf'))
distribution_id = config.get('aws_cloudfront', 'distribution_id')
And inside the conf file at ~/my-app.cnf:
[aws_cloudfront]
distribution_id = DISTRIBUTION_ID
So on my live server I just need to drop the cnf file into the user's home dir and change the distribution_id
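Putting it together with the invalidation code from the question, the script ends up looking roughly like this (a sketch reusing the same boto calls):
import os
import ConfigParser

import boto

# read the per-server distribution id from the conf file dropped in the home dir
config = ConfigParser.ConfigParser()
config.read(os.path.expanduser('~/my-app.cnf'))
distribution_id = config.get('aws_cloudfront', 'distribution_id')

print 'Connecting to CloudFront'
cf = boto.connect_cloudfront()
cf.create_invalidation_request(distribution_id, ['/testkey'])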
I am trying to create a fabfile.py so that I can deploy on EC2. I have the following in my fabfile.py:
from __future__ import with_statement

from fabric.api import *

def ec2():
    env.hosts = ['111.111.111.111']
    env.user = 'ubuntu'
    env.key_filename = '/path/to/my/pem/key.pem'

def run_ls():
    run('ls -alt')
'111.111.111.111' is the elastic IP of my instance, and I always log in as ubuntu, not root.
When I run the following command:
fab ec2 run_ls
I see the following output:
[111.111.111.111] Executing task 'run_ls'
[111.111.111.111] run: ls -alt
Fatal error: Host key for 111.111.111.111 did not match pre-existing key! Server's key was changed recently, or possible man-in-the-middle attack.
Aborting.
Not sure what is going on, but I can't seem to find any great tutorials on using Fabric on EC2, and I don't know how that is possible.
Thanks
Update:
It looks like
env.hosts = ['111.111.111.111']
is not valid; you need to use the actual hostname/URL:
env.hosts = ['mywebsite.com']
which fixed my issue.
You can also use the '--disable-known-hosts' switch to ignore this error.
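For example, reusing the tasks from the question:
fab --disable-known-hosts ec2 run_ls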
Make sure your elastic IP is attached to the instance. I think key_filename takes a single argument, but mine works when I pass in a list instead:
env.user = "ubuntu"
env.key_filename = ["my_key.pem",]
Maybe you should try using the public hostname of your instance like:
env.roledefs.update({
    'prod': ['ec2-52-14-72-225.us-west-1.compute.amazonaws.com'],
})
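With roledefs defined like this, you would then typically select the role from the command line (e.g. with Fabric 1.x's --roles/-R flag):
fab -R prod run_ls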
From a Vagrant issue on GitHub, you may need to remove the host from the known_hosts file using a command like this:
ssh-keygen -R 111.111.111.111