Python Config File with Sensitive Arguments

I have a publicly hosted repository on GitHub that I use to manage updates for scripts on my server. I would like my scripts to read some sensitive arguments automatically, but I don't want those arguments to be public.
My thought is to .gitignore a config file containing the sensitive arguments and copy that file over manually when installing. Alternatively, I was thinking of including an encrypted config file in the GitHub repository and manually supplying the key as an environment variable on my server.
What is the best practice to achieve this outcome? Am I missing something completely? Any info or advice would be appreciated.

Adding a config file to .gitignore and manually copying it over is a commonly used approach and is fine. I'd say the most common approach is to use environment variables, although you'd still have to configure them manually on your server. Here's a short article with some good examples of these approaches.
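For instance, a minimal sketch of the environment-variable approach (the variable name here is hypothetical, not from the question):

import os

# Read the secret from the environment; fail loudly if it was never configured.
api_key = os.environ.get('MY_APP_API_KEY')
if api_key is None:
    raise RuntimeError('MY_APP_API_KEY is not set; configure it on the server')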

Related

What is the best way to manage client/server specific files with git?

I use python to develop code on my work laptop and then deploy to our server for automation purposes.
I just recently started using git and github through PyCharm in hopes of making the deployment process smoother.
My issue is that I have a config file (YAML) that uses different parameters for my build environment (laptop) and production (server). For example, the file path changes.
Is there a git practice I could implement so that when pushing from my laptop or pulling from the server, changes to specific parts of a file are excluded?
I use .gitignore for files such as pyvenv.cfg but is there a way to do this within a file?
Another approach I thought of would be to utilize different branches for local and remote specific parameters...
For Example:
The local branch would contain local parameters and the production branch would contain production parameters. In this case I would first push from my local machine to the local branch. Next I would make the necessary changes to the parameters for production (in my situation it is much easier to work on my laptop than through the server), then push to the production branch. However, I have a feeling this is against good practice, or simply misuses branches.
Thank you.
Config files are also a common place to store credentials (e.g. a login/password for a database, or an API key for a web service), and it is generally a good idea not to store those in the repository.
A common practice is to store template files in the repo (e.g. config.yml.sample), to not store the actual config file along with the code (even adding it to .gitignore if it lives in a versioned directory), and to add steps at deployment time that either set up the initial config file or update the existing one. Those steps can be manual or scripted, for instance as sketched below. You can back up and version the config separately, if needed.
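A minimal sketch of such a scripted deployment step, assuming the config.yml.sample naming from above:

import pathlib
import shutil

# Create the real config from the template on first deployment only;
# an existing config is left untouched so server-specific edits survive updates.
config = pathlib.Path('config.yml')
if not config.exists():
    shutil.copy('config.yml.sample', config)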
Another possibility is to take the elements that should be adapted from somewhere else (the environment, for instance), and have entries like user: $APP_DB_USER in your config file. You then provision these variables on each machine, e.g. via an env file on your local machine and a different one on your prod server.
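Assuming PyYAML is installed, the substitution could be done like this (APP_DB_USER comes from the example above; the rest is hypothetical):

import os
import yaml  # PyYAML

# Read the raw config, expand $VAR placeholders from the environment,
# then parse the result as YAML.
with open('config.yml') as f:
    raw = f.read()
config = yaml.safe_load(os.path.expandvars(raw))
db_user = config['user']  # resolved from $APP_DB_USER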

Trouble deciding whether to use autoenv or python dotenv

I've been exposed to both of these tools, but they seem to serve the same purpose. My question is: are they different, and if so, how?
In my research it seems to me that autoenv is global in scope while dotenv is a bit more application specific. While this seems an advantage in many cases, I wonder if it could also create unforeseen problems.
Second, what would be the pros/cons of using one over the other (or should I use each in different situations)?
I've read through documentation for each, but have been unable to find an article comparing the two. It is relatively recent that I've developed a stronger grasp on environment variables in general so apologies if I missed something obvious in the documentation.
I'm primarily developing web apps with Flask and deploying on Heroku if that would influence my choice.
Thanks in advance.
autoenv is meant for the CLI: it activates environments when you cd into a directory containing an .env file.
For example, if you need certain environment variables in your local development environment whenever you cd into the directory, you would use autoenv or the more mature alternative, direnv.
dotenv is used in Python to find an .env file in the working directory or its parent directories and load its variables. This is good for services, as they usually don't have a shell running.
So for your Heroku deployments you should use dotenv.
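A minimal usage sketch with python-dotenv (the variable name is hypothetical):

from os import getenv
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # finds .env in the working directory or a parent
print(getenv('MY_ENVIRONMENT_VARIABLE'))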
If, however, you are putting config vars straight into the Heroku settings, you don't need either; you would simply use os.getenv:
from os import getenv
print(getenv('MY_ENVIRONMENT_VARIABLE'))

Python app configuration best practices

I know this issue has been discussed before, but I am struggling to find a straightforward explanation of how to approach configuration between a local development and a production server.
What I have done so far: I had one my_app_config.py file that had machine/scenario (test vs. production) sections I could just comment out. I would develop with my local machine path hardcoded, a test database connection string, my test spreadsheet location, etc. When it came time to deploy the code to the server, I would comment out the "test" section and uncomment the "production" section. As you may guess, this is fraught with errors.
I recently adopted the Python ConfigParser library to use .ini files. Now, I have the following lines in my code
import os
import ConfigParser  # configparser in Python 3

config = ConfigParser.RawConfigParser()
config.read(os.path.abspath(os.path.join(
    os.path.dirname(__file__), '..', 'settings', 'my_app_config.ini')))
database_connect_string_admin = config.get('Database', 'admin_str')
The problems with this are many...
I need to have the import at the top of every file
The filename my_app_config.ini can't change, so I rely on comments within the .ini file itself to know which environment it belongs to; the copies are stored in a folder tree so I know which is which.
The path to the config file is hardcoded relative to each Python file, so where a file lives in the tree structure dictates whether I get a copy/paste error.
I tried to set environment variables at the beginning of the program, but all the imports for all modules are performed right away at code launch. I was getting "not found" errors left and right.
What I want: to keep all the configuration stored in one place so it's not easy to lose track of what I am doing. I want an easy way to keep these configuration files (ideally one file or script) under version control (security is a whole other issue, I digress). I want to be able to seamlessly switch contexts (local-test, local-production, serverA-test, serverA-production, serverB-test, serverB-production). My app uses:
my_app_config.ini read by my parser
uwsgi.ini read by the uwsgi application server emperor
web_config.py used by the flask application
nginx.conf symlinked to the web server's configuration
celery configuration
not to mention different paths for everything (ideally handled by the magic config-handling genie). I imagine once I figure this out, I will be embarrassed it took so long to grasp.
Are environment variables the answer to what I am trying to do here?
You should try simple-settings. It will resolve all your issues. You set a single environment variable:
in development
$ export SIMPLE_SETTINGS=settings.general,settings.development
$ python app.py
in production
$ export SIMPLE_SETTINGS=settings.general,settings.production
$ python app.py
You can keep development.py and production.py out of the repository for security reasons.
Example
settings/general.py
SIMPLE_CONF = 'simple'
app.py
from simple_settings import settings
print(settings.SIMPLE_CONF)
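For illustration, the environment-specific modules named in SIMPLE_SETTINGS would then override or extend the general settings; the values here are hypothetical:

# settings/development.py -- listed after settings.general, so any name
# defined in both modules takes its value from here
SIMPLE_CONF = 'simple-dev'
DEBUG = True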
The documentation describes many more features and benefits.

Using un-managed file in Jenkins build step

I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file in my git repo as I'm planning on open-sourcing the project when I'm done (anyone using the project would need their own keys). I realize this may not be the best way of doing things, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the copy to slave plugin, but it seems like any file that I want would first (manually) need to be copied or created in workspace/userContent. Am I missing something?
I would suggest using an environment variable, like MYPROJECT_SETTINGS. When Jenkins runs the task, you can then override the default path and point it at wherever you keep the settings file for Jenkins.
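A minimal sketch of that lookup (MYPROJECT_SETTINGS is the variable suggested above; the default path is hypothetical):

import os

# Fall back to the in-repo default when the variable isn't set;
# the Jenkins job overrides it to point at its own copy of the file.
settings_path = os.environ.get('MYPROJECT_SETTINGS', 'settings.py')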
The other option, in case you don't want to copy the settings file to each build machine by hand, would be to make a settings.py with some default fake keys, which you can add to your repo, plus a local settings file with the real keys, which overrides those options, e.g.:
# settings.py -- committed to the repo with safe placeholder values
SECRET_KEY = 'fake stuff'

try:
    # settings_local.py is gitignored and holds the real keys;
    # anything it defines overrides the defaults above
    from settings_local import *
except ImportError:
    pass
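The gitignored counterpart might then look like this (the value is of course hypothetical):

# settings_local.py -- NOT committed; add it to .gitignore
SECRET_KEY = 'the-real-secret-key'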
I am using the Copy Data To Workspace Plugin for this. The Copy to Slave plugin should also work, but I found Copy Data To Workspace easier to work with for this use case.
Why not just run echo my-secret-keys > settings.txt in Jenkins and adjust your script to read that file, so it can stay out of the repo?

any way to composite configuration/.ini files in pylons?

We're running a pylons app with multiple ini files (production, staging, development, etc). When a new setting is added that is the same in all environments, it would be great to set it once in some sort of master configuration that gets included by all the .ini files, or via some other way to load centralized config alongside deploy-specific config.
It doesn't look like there's an "import" syntax for pylons ini files. What's the best way to achieve this type of config compositing for pylons, if any?
You can use the ConfigParser module, which can read several files in one pass, with later files overriding earlier ones.
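For example (the file names here are hypothetical; config.read() returns the list of files it actually found):

import ConfigParser  # configparser in Python 3

config = ConfigParser.ConfigParser()
# base.ini holds the shared defaults; production.ini contains only the
# deploy-specific overrides and wins wherever both define a value.
config.read(['base.ini', 'production.ini'])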
