Fabric2 CLI: gracefully switch SSH user - python

I am using Invoke/Fabric with boto3 to create an AWS instance and hand it over to an Ansible script. In order to do that, a few things have to be prepared on the remote machine before Ansible can take over, notably installing Python, creating a user, and copying public SSH keys.
The AWS image comes with a particular user. I would like to use this user only to create my own user, copy public keys, and remove password login afterwards. When using the Fabric CLI, the connection object is not created by my code and cannot be modified within tasks.
What would be a good way to switch users (aka recreate a connection object between tasks) and run the following tasks with the user that I just created?
I might not be going about it the right way (I am migrating from Fabric 1, where switching the env values was sufficient). Here are a few strategies I am aware of; most of them remove some flexibility we have been relying on.
Create a custom AMI on which all preparations have been done already.
Create a local Connection object within a task for the user setup before falling back to the connection object provided by the Fabric CLI.
Deeper integrate AWS with Ansible (the problem is that we have users that might use Ansible after the instance is alive but don't have AWS privileges).
I guess this list also amounts to a best-practices question.

The AWS image comes with a particular user. I would like to use this user
only to create my own user, copy public keys, and remove password login
afterwards. When using the Fabric CLI, the connection object is not created
by my code and cannot be modified within tasks.
I'm not sure this is accurate. I have switched users during the execution of a task just fine. You just have to make sure that all subsequent calls that need the updated env use the execute operation.
e.g.
from fabric.api import env, execute, run, task

def create_users():
    run('some command')

def some_other_stuff():
    run('whoami')

@task
def new_instance():
    # provision instance using boto3
    env.hosts = [ip_address]
    env.user = 'ec2-user'
    env.password = 'sesame'
    execute(create_users)
    env.user = 'some-other-user'
    execute(some_other_stuff)
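Since the question is actually about Fabric 2, here is a minimal sketch of the question's second strategy: building Connection objects yourself inside the task instead of relying on the one the CLI provides. The IP address, user names, and password are placeholders, and provision_with_boto3 is a hypothetical helper, not a real Fabric API.

from fabric import Connection, task

@task
def new_instance(c):
    ip_address = provision_with_boto3()  # hypothetical helper returning the new instance's IP
    # Bootstrap connection as the image's default user
    bootstrap = Connection(host=ip_address, user='ec2-user',
                           connect_kwargs={'password': 'sesame'})
    bootstrap.run('sudo useradd -m deploy')  # create your own user, copy keys, etc.
    bootstrap.close()
    # Fresh connection as the user you just created
    deploy = Connection(host=ip_address, user='deploy')
    deploy.run('whoami')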

Related

Learning Python fast: how can I protect some private connections from being exposed?

Hi, I'm new to the community and new to Python (experienced but rusty in other high-level languages), so my question is simple.
I made a simple script to connect to a private ftp server, and retrieve daily information from it.
from ftplib import FTP

# Connect to the server to retrieve inventory
def FTPconnection(file_name):
    # Open the ftp connection
    ftp = FTP('ftp.serveriuse.com')
    ftp.login('mylogin', 'password')
    # List the files in the current directory
    print("Current File List:")
    ftp.dir()  # dir() prints the listing itself and returns None, so there is nothing to assign
    # Get the latest csv file from the server
    # ftp.cwd("/pub")
    gfile = open(file_name, "wb")
    ftp.retrbinary('RETR ' + file_name, gfile.write)
    gfile.close()
    ftp.quit()

FTPconnection('test1.csv')
FTPconnection('test2.csv')
That's the whole script: it passes my credentials and then calls the function FTPconnection on two different files I'm retrieving.
Then my other script, which processes the files, has an import statement, since I call this script as a module; all the import does is connect to the FTP server and fetch the information.
import ftpconnect as ftpc
This is in the other Python script, the one that does the processing.
It works, but I want to improve it, so I need some guidance on best practices for how to do this, because in Spyder 4.1.5 I get a 'Module ftpconnect called but unused' warning, so I am probably missing something here. I'm developing on macOS using Anaconda and Python 3.8.5.
I'm trying to build an app to automate some tasks, but I couldn't find anything about modules that guided me to better code; the docs simply say you import whatever .py file name you used and that will be considered a module.
And my final question: how can you normally protect private information (FTP credentials) from being exposed? This is not about protecting my code, only the credentials.
There are a few options for storing passwords and other secrets that a Python program needs to use, particularly a program that needs to run in the background where it can't just ask the user to type in the password.
Problems to avoid:
Checking the password in to source control where other developers or even the public can see it.
Other users on the same server reading the password from a configuration file or source code.
Having the password in a source file where others can see it over your shoulder while you are editing it.
Option 1: SSH
This isn't always an option, but it's probably the best. Your private key is never transmitted over the network; SSH just runs mathematical calculations to prove that you have the right key.
In order to make it work, you need the following:
The database or whatever you are accessing needs to be accessible by SSH. Try searching for "SSH" plus whatever service you are accessing. For example, "ssh postgresql". If this isn't a feature on your database, move on to the next option.
Create an account to run the service that will make calls to the database, and generate an SSH key.
Either add the public key to the service you're going to call, or create a local account on that server, and install the public key there.
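As an illustration of that pattern (not part of the answer above), here is a minimal sketch using the third-party sshtunnel package to reach PostgreSQL over SSH. All host names, paths, and credentials are placeholders.

from sshtunnel import SSHTunnelForwarder
import psycopg2

# Forward a local port to the database through the SSH server
tunnel = SSHTunnelForwarder(
    ('db-gateway.example.com', 22),           # SSH server (placeholder)
    ssh_username='myapp',
    ssh_pkey='/home/myapp/.ssh/id_ed25519',   # private key, never sent over the wire
    remote_bind_address=('127.0.0.1', 5432),  # PostgreSQL as seen from the SSH server
)
tunnel.start()

# Database authentication is separate from SSH; configure it as appropriate
conn = psycopg2.connect(host='127.0.0.1', port=tunnel.local_bind_port,
                        user='myapp', dbname='mydb')
# ... use the connection ...
conn.close()
tunnel.stop()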
Option 2: Environment Variables
This one is the simplest, so it might be a good place to start. It's described well in the Twelve Factor App. The basic idea is that your source code just pulls the password or other secrets from environment variables, and then you configure those environment variables on each system where you run the program. It might also be a nice touch if you use default values that will work for most developers. You have to balance that against making your software "secure by default".
Here's an example that pulls the server, user name, and password from environment variables.
import os
server = os.getenv('MY_APP_DB_SERVER', 'localhost')
user = os.getenv('MY_APP_DB_USER', 'myapp')
password = os.getenv('MY_APP_DB_PASSWORD', '')
db_connect(server, user, password)
Look up how to set environment variables in your operating system, and consider running the service under its own account. That way you don't have sensitive data in environment variables when you run programs in your own account. When you do set up those environment variables, take extra care that other users can't read them: check file permissions, for example. Of course any users with root permission will be able to read them, but that can't be helped.
If you're using systemd, look at the service unit, and be careful to use EnvironmentFile instead of Environment for any secrets, because Environment values can be viewed by any user with systemctl show.
Option 3: Configuration Files
This is very similar to the environment variables, but you read the secrets from a text file. I still find the environment variables more flexible for things like deployment tools and continuous integration servers. If you decide to use a configuration file, Python supports several formats in the standard library, like JSON, INI, netrc, and XML. You can also find external packages for formats like YAML and TOML. Personally, I find JSON and YAML the simplest to use, and YAML allows comments.
Three things to consider with configuration files:
Where is the file? Maybe a default location like ~/.my_app, and a command-line option to use a different location.
Make sure other users can't read the file.
Obviously, don't commit the configuration file to source code. You might want to commit a template that users can copy to their home directory.
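A minimal sketch covering all three points, assuming a JSON file; the default location, key names, and the db_connect call are placeholders carried over from the earlier example:

import json
import os
from pathlib import Path

# Default location, overridable via an environment variable (first point above)
config_path = Path(os.getenv('MY_APP_CONFIG', Path.home() / '.my_app'))

# Refuse to run if other users can read the file (POSIX permission bits)
if config_path.stat().st_mode & 0o077:
    raise SystemExit(f'{config_path} is group/world readable; run: chmod 600 {config_path}')

with open(config_path) as f:
    config = json.load(f)
db_connect(config['server'], config['user'], config['password'])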
Option 4: Python Module
Some projects just put their secrets right into a Python module.
# settings.py
db_server = 'dbhost1'
db_user = 'my_app'
db_password = 'correcthorsebatterystaple'
Then import that module to get the values.
# my_app.py
from settings import db_server, db_user, db_password
db_connect(db_server, db_user, db_password)
One project that uses this technique is Django. Obviously, you shouldn't commit settings.py to source control, although you might want to commit a file called settings_template.py that users can copy and modify.
I see a few problems with this technique:
Developers might accidentally commit the file to source control. Adding it to .gitignore reduces that risk.
Some of your code is not under source control. If you're disciplined and only put strings and numbers in here, that won't be a problem. If you start writing logging filter classes in here, stop!
If your project already uses this technique, it's easy to transition to environment variables. Just move all the setting values to environment variables, and change the Python module to read from those environment variables.
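For example, the transition might look like this (a sketch; the variable names mirror the settings.py above):

# settings.py, after the transition: same import site, values now from the environment
import os

db_server = os.getenv('MY_APP_DB_SERVER', 'dbhost1')
db_user = os.getenv('MY_APP_DB_USER', 'my_app')
db_password = os.getenv('MY_APP_DB_PASSWORD', '')

my_app.py keeps the same import and doesn't change at all.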

Alternative to attempting to persist Environment Variables in Python

Up until now, whenever I have needed to store a "secret" for a simple python application, I have relied on environment variables. In Windows, I set the variables via the Computer Properties dialog and I access them in my Python code like this:
database_password = os.environ['DB_PASS']
The simplicity of this approach has served me well. Now I have a project that uses Oauth2 authentication and I have a need to store tokens to the environment that may change throughout program execution. I want them to persist the next time I execute the program. This is what I have come up with:
import os

# fetch a new token (oauth here is an already-configured OAuth2 session object)
token = oauth.fetch_token('https://api.example.com/oauth/v2/token', code=secretcode)
access_token = token['access_token']

# make sure it persists in the current session
os.environ['TOKEN'] = access_token

# store to the system environment (Windows)
cmd = 'SETX /M TOKEN ' + access_token
os.system(cmd)
It gets the job done quickly for me today, but does not seem like the right approach to add to my toolbox. Does anyone have a more elegant way of doing what I am trying to do that does not add too many layers of complexity? If the solution worked across platforms that would be a bonus.
I have used the Python keyring module with great success. It's an interface to credential vaults provided by the operating system (e.g., Windows Credential Manager). I haven't used it on Linux, but it appears to be supported as well.
Storing a password/token and then retrieving it can be as simple as:
import keyring
keyring.set_password("system", "username", "password")
keyring.get_password("system", "username")
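Applied to the token in the question, it might look like this (the service and key names are arbitrary):

import keyring

# Persist the refreshed OAuth token instead of writing it into the machine environment
access_token = 'example-token-value'  # in practice, token['access_token'] from the question
keyring.set_password('my_app', 'oauth_token', access_token)

# On the next run, read it back (returns None if nothing is stored yet)
access_token = keyring.get_password('my_app', 'oauth_token')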

Postgres: is set_config()/current_setting() a private/robust stack for application variables?

In my application I have triggers that need access to things like user id. I am storing that information with
set_config('PRIVATE.user_id', '221', false)
then, while I am doing operations that modify the database, triggers may do:
user_id = current_setting('PRIVATE.user_id');
It seems to work great. My database actions are mostly from Python via psycopg2; once I get a connection, I do the set_config() as my first operation, then go about my database business. Is this practice a good one, or could data leak from one session to another? I was doing this sort of thing with the SD and GD variables in plpython, but that language proved too heavy for what I was trying to do, so I had to shift to plpgsql.
While it's not really what they're designed for, you can use GUCs as session variables.
They can also be transaction scoped, with SET LOCAL or the set_config equivalent.
So long as you don't allow the user to run arbitrary SQL they're a reasonable choice, and session-local GUCs aren't shared with other sessions. They're not designed for secure session-local storage but they're handy places to stash things like an application's "current user" if you're not using SET ROLE or SET SESSION AUTHORIZATION for that.
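For instance, a sketch of the transaction-scoped variant from Python/psycopg2, matching the asker's setup; the DSN, table, and statement are placeholders:

import psycopg2

conn = psycopg2.connect('dbname=mydb')  # placeholder DSN
with conn:  # commits the transaction on exit
    with conn.cursor() as cur:
        # is_local=true scopes the setting to this transaction (the SET LOCAL equivalent)
        cur.execute("SELECT set_config('PRIVATE.user_id', %s, true)", ('221',))
        # Triggers fired by this statement can call current_setting('PRIVATE.user_id')
        cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")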
Do be aware that the user can define them via environment variables if you let them run a libpq-based client, e.g.
$ PGOPTIONS="-c myapp.user_id=fred" psql -c "SHOW myapp.user_id;"
myapp.user_id
---------------
fred
(1 row)
Also, on older PostgreSQL versions (before 9.2) you had to declare the namespace via custom_variable_classes in postgresql.conf before you could use it.

Nested Fabric Connections

The scenario is our production servers are sitting in a private subnet with a NAT instance in front of them to allow maintenance via SSH. Currently we connect to the NAT instance via SSH then via SSH from there to the respective server.
What I would like to do is run deployment tasks from my machine using the NAT as a proxy without uploading the codebase to the NAT instance. Is this possible with Fabric or am I just going to end up in a world of pain?
EDIT
Just to follow up on this, as @Morgan suggested, the gateway option will indeed fix this issue.
For a bit of completeness, in my fabfile.py:
from fabric.api import env, roles, run

def setup_connections():
    """
    This should be called as the first task in all calls in order to set up the correct connections,
    e.g. fab setup_connections task1 task2...
    """
    env.roledefs = {}
    env.gateway = 'ec2-user@X.X.X.X'  # where all the magic happens
    tag_mgr = EC2TagManager(...)
    for role in ['web', 'worker']:
        env.roledefs[role] = ['ubuntu@%s' % ins for ins in
                              tag_mgr.get_instances(instance_attr='private_ip_address', role=role)]
    env.key_filename = '/path/to/server.pem'

@roles('web')
def test_uname_web():
    run('uname -a')
I can now run fab setup_connections test_uname_web and get the uname of my web server.
So if you have a newer version of Fabric (1.5+) you can try using the gateway option. I've never used it myself, but it seems like what you'd want.
Documentation here:
env var
cli flag
execution model notes
original ticket
Also, if you run into any issues, all of us tend to idle in IRC.

Using Fabric to manipulate a database connection

I just started using Fabric to better control the specific settings for test and deployment environments, and I'm trying to get an idea of the best approach to swapping configurations.
Let's say I have a module in my application that defines a simple database connection and some constants for authentication by default:
host = 'db.host.com'
user = 'someuser'
passw = 'somepass'
db = 'somedb'

class DB():
    def __init__(self, host=host, user=user, passw=passw, db=db, cursor='DictCursor'):
        # make a database connection here and all that jazz
        pass
Before I found fabric, I would use the getfqdn() function from the socket library to check the domain name of the host the system was being pushed to and then conditionalize the authentication credentials.
from socket import getfqdn

if getfqdn() == 'test.somedomain.com':
    host = 'db.host.com'
    user = 'someuser'
    passw = 'somepass'
    db = 'somedb'
elif getfqdn() == 'test.someotherdomain.com':
    host = 'db.other.com'
    user = 'otherguy'
    passw = 'otherpass'
    db = 'somedb'
This, for obvious reasons, is really not that great. What I would like to know is the smartest way of adapting something like this in Fabric, so that when the project gets pushed to a certain test/deployment server, these values are changed post-push.
I can think of a few approaches just from looking through the docs. Should I have a file that just defines the constants, which Fabric could write to using shell commands based on what the deployment target is, and which the module defining the database handler could then import? Does it make sense to run open and write from within the fabfile like this? I assume I'd also have to .gitignore these kinds of files so they don't get committed into the repo, and just rely on Fabric to deploy them.
I plan on adapting whatever approach is the best suggested to all the configuration settings that I currently either swap using getfqdn or adjust manually. Thanks!
You can key all of that off of env.host and then use something like the contrib upload_template function to render the conf file and push it up. Templates are the best fit in these instances (see Puppet and other config managers, which work the same way).
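As an illustration (not from the answer itself), a sketch with Fabric 1's fabric.contrib.files.upload_template; the settings dict, template name, and destination path are placeholders:

from fabric.api import env, task
from fabric.contrib.files import upload_template

# Per-host database settings, keyed off env.host as suggested above
DB_SETTINGS = {
    'test.somedomain.com': dict(host='db.host.com', user='someuser',
                                passw='somepass', db='somedb'),
    'test.someotherdomain.com': dict(host='db.other.com', user='otherguy',
                                     passw='otherpass', db='somedb'),
}

@task
def push_db_config():
    # Render the local template with this host's values and upload the result;
    # by default the template is interpolated with %(key)s placeholders
    upload_template('db_settings.py.tmpl', '/srv/app/db_settings.py',
                    context=DB_SETTINGS[env.host])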
