I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes, a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets.
You're exactly right to want to encrypt your sensitive settings file while still maintaining the file in version control. As you mention, the best solution would be one in which Git will transparently encrypt certain sensitive files when you push them so that locally (i.e. on any machine which has your certificate) you can use the settings file, but Git or Dropbox or whoever is storing your files under VC does not have the ability to read the information in plaintext.
Tutorial on Transparent Encryption/Decryption during Push/Pull
This gist (https://gist.github.com/873637) is a tutorial on how to use Git's smudge/clean filter driver with OpenSSL to transparently encrypt pushed files. You just need to do some initial setup.
Summary of How it Works
You'll basically be creating a .gitencrypt folder containing 3 bash scripts,
clean_filter_openssl
smudge_filter_openssl
diff_filter_openssl
which are used by Git for decryption, encryption, and supporting Git diff. A master passphrase and salt (fixed!) are defined inside these scripts, and you MUST ensure that .gitencrypt is never actually pushed.
Example clean_filter_openssl script:
#!/bin/bash
SALT_FIXED=<your-salt> # 24 or fewer hex characters
PASS_FIXED=<your-passphrase>
# Git pipes staged file content through this "clean" filter, so what gets
# stored (and pushed) is the encrypted form.
openssl enc -base64 -aes-256-ecb -S "$SALT_FIXED" -k "$PASS_FIXED"
The smudge_filter_openssl and diff_filter_openssl scripts are similar; see the gist.
Your repo with sensitive information should have a .gitattributes file (unencrypted and included in the repo) which references the "openssl" filter, which in turn points at the .gitencrypt directory (containing everything Git needs to encrypt/decrypt the project transparently) present on your local machine.
.gitattributes contents:
* filter=openssl diff=openssl
Finally, you will also need to add the following content to your .git/config file (the [merge] section uses Git config syntax, so it belongs here rather than in .gitattributes):
[filter "openssl"]
    smudge = ~/.gitencrypt/smudge_filter_openssl
    clean = ~/.gitencrypt/clean_filter_openssl
[diff "openssl"]
    textconv = ~/.gitencrypt/diff_filter_openssl
[merge]
    renormalize = true
Now, when you push the repository containing your sensitive information to a remote repository, the files will be transparently encrypted. When you pull on a local machine which has the .gitencrypt directory (containing your passphrase), the files will be transparently decrypted.
Notes
I should note that this tutorial does not describe a way to encrypt only your sensitive settings file. It will transparently encrypt the entire repository that is pushed to the remote VC host and decrypt the entire repository so it is entirely decrypted locally. To achieve the behavior you want, you could place sensitive files for one or many projects in one sensitive_settings_repo. You could investigate how this transparent encryption technique works with Git submodules (http://git-scm.com/book/en/Git-Tools-Submodules) if you really need the sensitive files to be in the same repository.
The use of a fixed passphrase could theoretically lead to brute-force vulnerabilities if attackers had access to many encrypted repos/files. IMO, the probability of this is very low. As a note at the bottom of this tutorial mentions, not using a fixed passphrase will result in local versions of a repo on different machines always showing that changes have occurred with 'git status'.
Heroku pushes the use of environment variables for settings and secret keys:
The traditional approach for handling such config vars is to put them under source - in a properties file of some sort. This is an error-prone process, and is especially complicated for open source apps which often have to maintain separate (and private) branches with app-specific configurations.
A better solution is to use environment variables, and keep the keys out of the code. On a traditional host or working locally you can set environment vars in your bashrc. On Heroku, you use config vars.
With Foreman and .env files, Heroku provide an enviable toolchain to export, import and synchronise environment variables.
Personally, I believe it's wrong to save secret keys alongside code. It's fundamentally inconsistent with source control, because the keys are for services extrinsic to the code. The one boon would be that a developer can clone HEAD and run the application without any setup. However, suppose a developer checks out a historic revision of the code. Their copy will include last year's database password, so the application will fail against today's database.
With the Heroku method above, a developer can checkout last year's app, configure it with today's keys, and run it successfully against today's database.
The cleanest way, in my opinion, is to use environment variables. You won't have to deal with .dist files, for example, and the project state in the production environment will be the same as on your local machine.
I recommend reading The Twelve-Factor App's config chapter, the others too if you're interested.
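In a Django settings.py, the environment-variable approach can look like the following sketch (hypothetical variable names; the bracketed lookups fail fast if a variable is unset):
# settings.py - pull secrets from the environment instead of the repo
import os

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']  # KeyError if unset: fail fast

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'myapp'),
        'USER': os.environ.get('DB_USER', 'myapp'),
        'PASSWORD': os.environ['DB_PASSWORD'],
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}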
I suggest using configuration files for that and not versioning them.
You can, however, version examples of the files.
I don't see any problem with sharing development settings. By definition, they should contain no valuable data.
An option would be to put project-bound credentials into an encrypted container (TrueCrypt or KeePass) and push it.
Update, from my comment below:
Interesting question, btw. I just found github.com/shadowhand/git-encrypt, which looks very promising for automatic encryption.
Since asking this question I have settled on a solution, which I use when developing small applications with a small team of people.
git-crypt
git-crypt uses GPG to transparently encrypt files when their names match certain patterns. For instance, if you add to your .gitattributes file...
*.secret.* filter=git-crypt diff=git-crypt
...then a file like config.secret.json will always be pushed to remote repos with encryption, but remain unencrypted on your local file system.
If you want to add a new GPG key (a person) to your repo which can decrypt the protected files, then run git-crypt add-gpg-user <gpg_user_key>. This creates a new commit. The new user will be able to decrypt subsequent commits.
BlackBox was recently released by Stack Exchange, and while I have yet to use it, it seems to exactly address the problems and support the features requested in this question.
From the description on https://github.com/StackExchange/blackbox:
Safely store secrets in a VCS repo (i.e. Git or Mercurial). These commands make it easy for you to GPG encrypt specific files in a repo so they are "encrypted at rest" in your repository. However, the scripts make it easy to decrypt them when you need to view or edit them, and decrypt them for use in production.
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
No, just don't, even if it's your private repo and you never intend to share it, don't.
You should create a local_settings.py, put it on VCS ignore, and in your settings.py do something like:
# settings.py - the import alone defines these names in this module
from local_settings import DATABASES, SECRET_KEY
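The untracked local_settings.py itself then holds the real values, along these lines (hypothetical values):
# local_settings.py - listed in .gitignore, never committed
SECRET_KEY = 'replace-me-with-a-real-secret'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myapp',
        'PASSWORD': 'not-in-the-repo',
    }
}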
If your secret settings are that volatile, I am eager to say you're doing something wrong.
EDIT: I assume you want to keep track of your previous password versions - say, for a script that would prevent password reuse, etc.
I think GnuPG is the best way to go - it's already used in one git-related project (git-annex) to encrypt repository contents stored on cloud services. GnuPG (GNU PGP) provides very strong key-based encryption.
You keep a key on your local machine.
You add 'mypassword' to ignored files.
In a pre-commit hook, you encrypt the mypassword file into the mypassword.gpg file tracked by git and add it to the commit.
In a post-merge hook, you just decrypt mypassword.gpg into mypassword.
Now if your 'mypassword' file did not change, then encrypting it will result in the same ciphertext and it won't be added to the index (no redundancy; note this assumes a deterministic encryption setup - by default gpg uses a random session key, so ciphertexts differ between runs). The slightest modification of mypassword results in a radically different ciphertext, and mypassword.gpg in the staging area will differ a lot from the one in the repository, and thus will be added to the commit. Even if the attacker gets hold of your gpg key, he still needs to brute-force the password. If the attacker gets access to a remote repository with ciphertext, he can compare a bunch of ciphertexts, but their number won't be sufficient to give him any non-negligible advantage.
Later on, you can use .gitattributes to provide on-the-fly decryption for quick git diff of your password.
Also you can have separate keys for different types of passwords etc.
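As a minimal sketch, such a pre-commit hook could be a small Python script (hooks can be any executable; gpg being on the PATH and the recipient UID are assumptions here):
#!/usr/bin/env python3
# .git/hooks/pre-commit - encrypt mypassword into the tracked
# mypassword.gpg and stage the result.
import subprocess
import sys

RECIPIENT = "you@example.com"  # hypothetical: the UID of your GPG key

subprocess.run(
    ["gpg", "--batch", "--yes", "--recipient", RECIPIENT,
     "--output", "mypassword.gpg", "--encrypt", "mypassword"],
    check=True,
)
subprocess.run(["git", "add", "mypassword.gpg"], check=True)
sys.exit(0)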
Usually, I separate passwords out as a config file, and make a dist version of it:
/yourapp
    main.py
    default.cfg.dist
And when I run main.py, I put the real password in a default.cfg copied from the dist file.
P.S. When you work with git or hg, you can ignore *.cfg files by adding them to .gitignore or .hgignore.
Provide a way to override the config
This is the best way to manage a set of sane defaults for the config you check in, without requiring the config to be complete or to contain things like hostnames and credentials. There are a few ways to override default configs.
Environment variables (as others have already mentioned) are one way of doing it.
The best way is to look for an external config file that overrides the default config values. This allows you to manage the external configs via a configuration management system like Chef, Puppet or CFEngine. Configuration management is the standard answer for managing configs separately from the codebase, so you don't have to do a release to update the config on a single host or a group of hosts.
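As a sketch of that overlay idea (hypothetical paths; the external file would be written by Chef/Puppet rather than checked in):
# Load checked-in defaults, then overlay values from the external,
# configuration-managed file if it exists.
import json
import os

with open("defaults.json") as f:
    config = json.load(f)

OVERRIDE_PATH = "/etc/myapp/config.json"
if os.path.exists(OVERRIDE_PATH):
    with open(OVERRIDE_PATH) as f:
        config.update(json.load(f))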
FYI: Encrypting creds is not always a best practice, especially in a place with limited resources. It may be the case that encrypting creds will gain you no additional risk mitigation and simply add an unnecessary layer of complexity. Make sure you do the proper analysis before making a decision.
Encrypt the passwords file, using for example GPG. Add the keys on your local machine and on your server. Decrypt the file and put it outside your repo folders.
I use a passwords.conf, located in my home folder. On every deploy this file gets updated.
No, private keys and passwords do not fall under revision control. There is no reason to burden everyone with read access to your repository with knowledge of sensitive service credentials used in production, when most likely not all of them should have access to those services.
Starting with Django 1.4, your Django projects ship with a project.wsgi module that defines the application object, and it's a perfect place to start enforcing the use of a project.local settings module that contains site-specific configurations.
This settings module is ignored from revision control, but its presence is required when running your project instance as a WSGI application, as is typical in production environments. This is what it should look like:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.local")
# This application object is used by the development server
# as well as any WSGI server configured to use this file.
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
Now you can have a local.py module whose owner and group can be configured so that only authorized personnel and the Django processes can read the file's contents.
If you need VCS for your secrets, you should at least keep them in a second repository separated from your actual code. That way you can give your team members access to the source code repository and they won't see your credentials. Furthermore, host this repository somewhere else (e.g. on your own server with an encrypted filesystem, not on GitHub), and for checking it out to the production system you could use something like git submodules.
This is what I do:
Keep all secrets as env vars in $HOME/.secrets (go-r perms) that $HOME/.bashrc sources (this way if you open .bashrc in front of someone, they won't see the secrets)
Configuration files are stored in VCS as templates, such as config.properties stored as config.properties.tmpl
The template files contain a placeholder for the secret, such as:
my.password=##MY_PASSWORD##
On application deployment, a script is run that transforms the template file into the target file, replacing placeholders with the values of environment variables, such as changing ##MY_PASSWORD## to the value of $MY_PASSWORD.
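A minimal sketch of that substitution step in Python, assuming the ##NAME## placeholder convention and file names above:
# render_config.py - run at deploy time to fill in placeholders.
import os
import re

def render(template_path, target_path):
    with open(template_path) as f:
        text = f.read()
    # Replace every ##NAME## with the NAME environment variable;
    # a missing variable raises KeyError, which fails the deploy early.
    text = re.sub(r"##(\w+)##", lambda m: os.environ[m.group(1)], text)
    with open(target_path, "w") as f:
        f.write(text)

render("config.properties.tmpl", "config.properties")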
Another approach could be to completely avoid saving secrets in version control systems and instead use a tool like Vault from HashiCorp: a secret storage with key rolling and auditing, with an API and embedded encryption.
You could use EncFS if your system provides that. Thus you could keep your encrypted data as a subfolder of your repository, while providing your application a decrypted view to the data mounted aside. As the encryption is transparent, no special operations are needed on pull or push.
It would, however, need to mount the EncFS folders, which could be done by your application based on a password stored elsewhere outside the versioned folders (e.g. environment variables).
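A rough sketch of that mount step from Python (hypothetical paths; assumes encfs is installed and supports reading the password from stdin via --stdinpass, with the password itself coming from an environment variable kept outside the repo):
# Mount the encrypted subfolder before the app reads its settings.
import os
import subprocess

ENCRYPTED_DIR = os.path.expanduser("~/repo/.secrets.encrypted")
MOUNT_POINT = os.path.expanduser("~/.secrets")

subprocess.run(
    ["encfs", "--stdinpass", ENCRYPTED_DIR, MOUNT_POINT],
    input=os.environ["ENCFS_PASSWORD"].encode(),
    check=True,
)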
Hi, I'm new to the community and new to Python (experienced but rusty in other high-level languages), so my question is simple.
I made a simple script to connect to a private ftp server, and retrieve daily information from it.
from ftplib import FTP

#Connect to the server to retrieve inventory
def FTPconnection(file_name):
    #Open ftp connection
    ftp = FTP('ftp.serveriuse.com')
    ftp.login('mylogin', 'password')

    #List the files in the current directory
    print("Current File List:")
    ftp.dir()  # dir() prints the listing itself and returns None

    #Get the latest csv file from server
    # ftp.cwd("/pub")
    with open(file_name, "wb") as gfile:
        ftp.retrbinary('RETR ' + file_name, gfile.write)
    ftp.quit()

FTPconnection('test1.csv')
FTPconnection('test2.csv')
That's the whole script, it passes my credentials, and then calls the function FTPconnection on two different files I'm retrieving.
Then my other script, which processes the files, imports this one as a module; the import just connects to the FTP server and fetches the information.
import ftpconnect as ftpc
This is in the other Python script, the one that does the processing.
It works, but I want to improve it, so I need some guidance on best practices for doing this, because in Spyder 4.1.5 I get a 'Module ftpconnect called but unused' warning... so I'm probably missing something here. I'm developing on macOS using Anaconda and Python 3.8.5.
I'm trying to build an app to automate some tasks, but I couldn't find anything about modules that guided me toward better code; the docs simply say you have to import whatever .py file name you used and that will be considered a module...
And my final question is: how can you normally protect private information (FTP credentials) from being exposed? This is not about protecting my code, just the credentials.
There are a few options for storing passwords and other secrets that a Python program needs to use, particularly a program that needs to run in the background where it can't just ask the user to type in the password.
Problems to avoid:
Checking the password in to source control where other developers or even the public can see it.
Other users on the same server reading the password from a configuration file or source code.
Having the password in a source file where others can see it over your shoulder while you are editing it.
Option 1: SSH
This isn't always an option, but it's probably the best. Your private key is never transmitted over the network, SSH just runs mathematical calculations to prove that you have the right key.
In order to make it work, you need the following:
The database or whatever you are accessing needs to be accessible by SSH. Try searching for "SSH" plus whatever service you are accessing. For example, "ssh postgresql". If this isn't a feature on your database, move on to the next option.
Create an account to run the service that will make calls to the database, and generate an SSH key.
Either add the public key to the service you're going to call, or create a local account on that server, and install the public key there.
Option 2: Environment Variables
This one is the simplest, so it might be a good place to start. It's described well in the Twelve Factor App. The basic idea is that your source code just pulls the password or other secrets from environment variables, and then you configure those environment variables on each system where you run the program. It might also be a nice touch if you use default values that will work for most developers. You have to balance that against making your software "secure by default".
Here's an example that pulls the server, user name, and password from environment variables.
import os
server = os.getenv('MY_APP_DB_SERVER', 'localhost')
user = os.getenv('MY_APP_DB_USER', 'myapp')
password = os.getenv('MY_APP_DB_PASSWORD', '')
db_connect(server, user, password)
Look up how to set environment variables in your operating system, and consider running the service under its own account. That way you don't have sensitive data in environment variables when you run programs in your own account. When you do set up those environment variables, take extra care that other users can't read them. Check file permissions, for example. Of course any users with root permission will be able to read them, but that can't be helped. If you're using systemd, look at the service unit, and be careful to use EnvironmentFile instead of Environment for any secrets. Environment values can be viewed by any user with systemctl show.
Option 3: Configuration Files
This is very similar to the environment variables, but you read the secrets from a text file. I still find the environment variables more flexible for things like deployment tools and continuous integration servers. If you decide to use a configuration file, Python supports several formats in the standard library, like JSON, INI, netrc, and XML. You can also find external packages like PyYAML and TOML. Personally, I find JSON and YAML the simplest to use, and YAML allows comments.
Three things to consider with configuration files:
Where is the file? Maybe a default location like ~/.my_app, and a command-line option to use a different location.
Make sure other users can't read the file.
Obviously, don't commit the configuration file to source code. You might want to commit a template that users can copy to their home directory.
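As a sketch, a JSON-based loader along the lines above (hypothetical file name and key):
# Read secrets from a JSON file in the user's home directory;
# the file should be chmod 600 and never committed.
import json
import os

CONFIG_PATH = os.path.expanduser("~/.my_app")

with open(CONFIG_PATH) as f:
    config = json.load(f)

password = config["db_password"]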
Option 4: Python Module
Some projects just put their secrets right into a Python module.
# settings.py
db_server = 'dbhost1'
db_user = 'my_app'
db_password = 'correcthorsebatterystaple'
Then import that module to get the values.
# my_app.py
from settings import db_server, db_user, db_password
db_connect(db_server, db_user, db_password)
One project that uses this technique is Django. Obviously, you shouldn't commit settings.py to source control, although you might want to commit a file called settings_template.py that users can copy and modify.
I see a few problems with this technique:
Developers might accidentally commit the file to source control. Adding it to .gitignore reduces that risk.
Some of your code is not under source control. If you're disciplined and only put strings and numbers in here, that won't be a problem. If you start writing logging filter classes in here, stop!
If your project already uses this technique, it's easy to transition to environment variables. Just move all the setting values to environment variables, and change the Python module to read from those environment variables.
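For example, the settings module above could keep its interface while reading from the environment (hypothetical variable names):
# settings.py after the transition: same names, values from the environment.
import os

db_server = os.getenv('MY_APP_DB_SERVER', 'dbhost1')
db_user = os.getenv('MY_APP_DB_USER', 'my_app')
db_password = os.getenv('MY_APP_DB_PASSWORD', '')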
I'm dealing with a large existing Python application and trying to set global defaults for all the boto3 resources and clients that don't have any specified. Since there are many of them, I don't want to update every place the resources are created to create a botocore Config object, so it seems to make sense to use the environment variable approach to configuration. I want to set four configuration values related to timeouts and retries, but of the four, the documentation only indicates that two of them can be set via environment variables. The same goes for configuring via a config file.
botocore.Config supports connect_timeout, read_timeout, retry mode, and retry max_attempts.
But the environment variables only support AWS_MAX_ATTEMPTS and AWS_RETRY_MODE (at least according to the documentation). How can I set connect_timeout and read_timeout by environment variable?
I don't think this is possible at the moment.
It seems that the environment variable names are all explicitly defined in botocore's config provider.
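If you can funnel client creation through a single helper, one possible workaround is to read your own environment variables once and build a shared botocore Config yourself; a sketch, with hypothetical names for the timeout variables (botocore itself does not read them):
import os

import boto3
from botocore.config import Config

DEFAULT_CONFIG = Config(
    connect_timeout=float(os.getenv("MY_AWS_CONNECT_TIMEOUT", "60")),
    read_timeout=float(os.getenv("MY_AWS_READ_TIMEOUT", "60")),
    retries={
        "max_attempts": int(os.getenv("AWS_MAX_ATTEMPTS", "5")),
        "mode": os.getenv("AWS_RETRY_MODE", "standard"),
    },
)

def client(service, **kwargs):
    # Apply the shared defaults unless the caller passes an explicit config.
    kwargs.setdefault("config", DEFAULT_CONFIG)
    return boto3.client(service, **kwargs)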
I'd like to change the environment variable DJANGO_SETTINGS_MODULE (along with a few others) and then have ALL relevant modules like django.conf, django.db, etc. reloaded to reflect the information from the new settings module. The new settings module will have a different database. I will be doing this in a middleware.
I was able to achieve this by reloading a few modules along with django.conf and django.db. All new SQL statements were fired against the new DB.
But this appears to be so hackish.
The main reason for me wanting to do this is to have the same Apache child process serve requests for different Django applications (different settings, not different apps) without having to create a new Apache child process, which reloads the whole thing.
Is there a clean way of achieving what I want to do?
Thanks,
UPDATE (19-Sept-2014): I have accepted Daniel Roseman's answer, as that seems to be the reality in the context of the question asked. The router approach he suggested was something I explored, but I couldn't use it because Django's transaction classes don't use the router; the router, I presume, exists for a different reason. The application code base I'm working on, which is pretty large, has tons of transaction.commit_manually calls for the default or a specific db alias. I was trying to get this to support multiple client databases without changing the application code.
However, I did manage to solve the main problem, which was to support multiple client DBs and other settings. I don't try to change the settings on the fly, nor do I use the router. Instead, I have a single settings.py with all client DB information, and I monkey-patched the connection handler to return a different database connection for the 'default' alias (or another specific alias used by the code) based on certain env variables set in the middleware. So far this has worked fine. I will post an update if I run into any issues or if someone else can point out a potential issue with the approach.
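For illustration only, a very rough sketch of that monkey patch (hypothetical names; this depends on Django internals of that era and is fragile):
# Redirect the 'default' alias to a per-client alias chosen by the
# middleware; settings.DATABASES defines one entry per client.
import os

from django.db import utils

_original_getitem = utils.ConnectionHandler.__getitem__

def _tenant_getitem(self, alias):
    if alias == "default":
        alias = os.environ.get("CLIENT_DB_ALIAS", "default")
    return _original_getitem(self, alias)

utils.ConnectionHandler.__getitem__ = _tenant_getitem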
No, there is no way to do this, and that's a very good thing as it is a bad idea. There is no reason to use the same Apache process for different sites: instead you should have different virtual hosts for each of your sites, and let Apache manage them.
I am looking for a template engine for pushing and pulling data from configuration files - to be more specific, Cisco router configuration files. My goal has two parts:
1) To be able to template my router config and insert unique data (hostname, interface IPs, etc.) from an authoritative source (MySQL). Afterwards, I have a mechanism for loading the configs.
2) Once a device is configured and placed into production, I need a way of auditing against the latest version of my template. This would allow us to discover when operators change the running configuration.
Thoughts?
Let's take the simplest approach.
Use whatever language and templating engine you want; write a script that generates a config keyed by, e.g., a device name.
To check, generate a config for a device, download the actual config from that device, run diff. Mail the differences, if any, to people in charge of auditing.
The templating engine makes no difference in your case: you have no performance constraints, it seems. I'd take Python + Mako / Jinja / Cheetah, or Ruby + Rails, but even a bash + sed script could work.
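As a sketch of that approach with Jinja2 and difflib (hypothetical template and file names):
# Render a device config from a template plus per-device data (e.g. pulled
# from MySQL), then diff it against the running config for the audit.
import difflib

from jinja2 import Template

with open("router.conf.j2") as f:
    template = Template(f.read())

rendered = template.render(hostname="core-rtr-01", loopback_ip="10.0.0.1")

with open("running-config.txt") as f:
    running = f.read()

diff = list(difflib.unified_diff(
    rendered.splitlines(), running.splitlines(),
    fromfile="template", tofile="running", lineterm=""))
if diff:
    print("\n".join(diff))  # in practice, mail this to the auditors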