Say I have an API key in my project that I don't want to share in a git repository, so I have to use environment variables. Why shouldn't I just set the environment variable on my local machine (like PATH) instead of making a .env file and adding the python-dotenv library to my project for the sake of doing essentially the same thing?
A .env file is just a way to store these variables in a file that packages such as python-dotenv can read and expose through os.environ. In short, it is a storage mechanism for configuration.
Your .gitignore will often list .env, so users can keep API keys in a .env file on their local machine for convenience while making sure they don't accidentally leave API keys in committed git files.
If you just do a bare os.environ['API-KEY'] = 'stuff' in your code, the secret ends up in a committed git file. Also, environment variables set that way only exist for the lifetime of that Python process; they don't persist between sessions, which is why storing them in a file is preferred.
Beyond .env there are other ways to store configuration; dynaconf is a great package, and its documentation shows many of the other kinds of configuration files.
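To make concrete what python-dotenv actually does, here is a minimal sketch of the same idea, parsing a .env file by hand (in practice you would just call dotenv.load_dotenv(); the function and file names here are illustrative):

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv(): read KEY=value lines
    from a gitignored file into os.environ, skipping blanks and # comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: a variable already exported in the shell wins
            os.environ.setdefault(key.strip(), value.strip())

# A .env file containing a line like
#   API_KEY=super-secret-value
# becomes readable as os.environ["API_KEY"] after load_env_file().
```

The point is that the file travels with the project (but not with the repo), so you don't have to re-export variables in every new shell session.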
Related
I use python to develop code on my work laptop and then deploy to our server for automation purposes.
I just recently started using git and github through PyCharm in hopes of making the deployment process smoother.
My issue is that I have a config file (YAML) with parameters that differ between my build environment (laptop) and production (server). For example, the file paths change.
Is there a git practice I could implement so that, when pushing from my laptop or pulling from the server, changes to specific parts of a file are excluded?
I use .gitignore for files such as pyvenv.cfg, but is there a way to do this within a single file?
Another approach I thought of would be to utilize different branches for local and remote specific parameters...
For Example:
The local branch would contain local parameters and the production branch would contain production parameters. In this case I would push first from my machine to the local branch. Next I would make the necessary changes to the parameters for production (in my situation it is much easier to work on my laptop than through the server) and then push to the production branch. However, I have a feeling this is against good practice, or simply misuses branches.
Thank you.
Config files are also a common place to store credentials (eg : a login/pwd for the database, an API key for a web service ...) and it is generally a good idea to not store those in the repository.
A common practice is to store template files in the repo (e.g. config.yml.sample), not store the actual config file along with the code (and even add it to .gitignore if it is in a versioned directory), and add steps at deployment time to either set up the initial config file or update the existing one; those steps can be manual or scripted. You can back up and version the config separately, if needed.
Another possibility is to take the elements that should be adapted from somewhere else (the environment, for instance) and have entries such as user: $APP_DB_USER in your config file. You then provision these variables on each machine, e.g. with an env.txt file on your local machine and a different one on your prod server.
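A minimal sketch of that second approach, using the $APP_DB_USER placeholder from above (note that plain YAML loaders do not expand such placeholders themselves, so here the raw file text is expanded from the environment before any parsing; the file name is illustrative):

```python
import os
from string import Template

def load_config_text(path):
    """Read a config file and substitute $VAR placeholders from the
    environment, e.g. a line 'user: $APP_DB_USER'. Unknown placeholders
    are left untouched by safe_substitute."""
    with open(path) as f:
        return Template(f.read()).safe_substitute(os.environ)

# On each machine, provision the variables before starting the app, e.g.:
#   export APP_DB_USER=alice       (laptop)
#   export APP_DB_USER=prod_user   (server)
```

The same versioned config file then works on both machines, with only the environment differing.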
Hi, I am relatively new to programming and am building my first Flask project, and I haven't been able to figure out whether I should access environment variables using dotenv / load_dotenv or from a config.py file.
I understand the config route is more flexible but my question is specifically to do with environment variables.
Is there a best practice here? [I am building a simple app that will be hosted externally]
Best practices dictate that any value which is secret should not be hard-coded into any files which persist with the project or are checked into source control. Your config file is very likely to be saved in source control, so it should not store secrets, but instead load them from the environment variables set at execution time of the app. For example, let's say you are configuring an SMTP relay:
MAIL_PORT is a value that is not secret and not likely to change so it is a good candidate to be set in your config file.
MAIL_PASSWORD is a secret value that you do not want saved in your project's repository, so it should be loaded from the host's environment variables.
In this example, your config file might contain entries that look something like this:
import os

MAIL_PORT = 465
MAIL_PASSWORD = os.environ.get('MAIL_PASSWORD')
Beyond evaluating whether or not a config value is a secret, also consider how often the value will change and how hard it would be to make that change. Anything hard-coded into your config file will require changing the file and adding a new commit to your source control, possibly even triggering a full CI/CD pipeline process. If the value were instead loaded from environment variables then this value could be changed by simply stopping the application, exporting the new value as an environment variable, and restarting the application.
Dotenv files are simply a convenience for grouping a number of variables together and auto-loading them to be read by your configuration. A .env file is not always used as these values can be manually exported when the app is invoked or handled by another system responsible for starting or scaling your application. Do not check .env or .flaskenv files into your source control.
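Since the question is about Flask, here is a sketch of how the MAIL_PORT / MAIL_PASSWORD split above might look in a config.py organized as a class (the class name is illustrative; Flask would typically load it with app.config.from_object):

```python
import os

class Config:
    """App configuration: non-secrets hard-coded, secrets read from the
    environment at import time."""
    MAIL_PORT = 465                                   # not secret, rarely changes
    MAIL_PASSWORD = os.environ.get("MAIL_PASSWORD")   # secret, set at deploy time

# In a Flask app this would typically be activated with:
#   app.config.from_object(Config)
```

If MAIL_PASSWORD is not exported when the module is imported, the value is simply None, which makes a missing secret easy to detect at startup.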
I have a publicly hosted repository on GitHub which I require to manage updates for scripts on my server. I would like my scripts to call some sensitive arguments automatically, however I don't want those arguments to be public.
My thoughts are to .gitignore a config file with my sensitive arguments and manually copy the config file when installing. Alternatively, I was thinking of including an encrypted config file in the GitHub repository and manually setting the hash as an environment variable on my server.
What is the best practice to achieve the outcome? Am I missing something completely? Any info or advice would be appreciated.
Adding a config file to .gitignore and manually copying it over is a commonly used approach and is fine. I'd say the most common approach is to use environment variables, although you'd still have to manually configure them on your server. Here's a short article with some good examples on these approaches.
I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file in my git repo since I'm planning on open-sourcing the project when I'm done (anyone using the project would need their own keys). I realize this may not be the best way of doing things, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the copy to slave plugin, but it seems like any file that I want would first (manually) need to be copied or created in workspace/userContent. Am I missing something?
I would suggest using an environment variable, like MYPROJECT_SETTINGS. Then, when Jenkins runs the task, you can override the default path with wherever you put the settings file for Jenkins.
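A minimal sketch of that idea (MYPROJECT_SETTINGS comes from the answer above; the default path is an assumption):

```python
import os

# Fall back to the in-repo default when the variable is not set;
# Jenkins (or any deployment) can export MYPROJECT_SETTINGS to point
# the app at its own copy of the settings file.
settings_path = os.environ.get("MYPROJECT_SETTINGS", "settings.py")
```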
The other option, in case you don't want to copy a settings file to each build machine by hand, is to commit a settings.py with default fake keys to your repo, plus a local settings file with the real keys that overrides some options, e.g.:
# settings.py file
SECRET_KEY = 'fake stuff'
try:
    from settings_local import *
except ImportError:
    pass
I am using the Copy Data To Workspace Plugin for this; the Copy to Slave plugin should also work, but I found Copy Data To Workspace easier to work with for this use case.
Why not just use "echo my-secret-keys > settings.txt" in Jenkins and adjust your script to read this file, so you can add it to the report?
I'm fairly new to Python and Django, and I'm working on a webapp that will be run on multiple servers. Each server has its own little configuration details (commands, file paths, etc.) that I would like to store in a settings file, with a different copy of the file on each system.
I know that in Django, there's a settings file. However, is that only for Django-related things? Or am I supposed to put this type of stuff in there too?
This page discusses yours and several other situations involving configurations on Django servers: http://code.djangoproject.com/wiki/SplitSettings
It's for any type of settings, but it's better to put local settings in a separate file so that version upgrades don't clobber them. Have the global settings file detect the presence of the local settings file and then either import everything from it or just execfile() it (exec(open(path).read()) on Python 3).
There isn't a standard place to put config files. You can easily create your own config file as needed though. ConfigParser might suit your needs (and is a Python built-in).
Regardless of the format that I use for my config file, I usually have my scripts that depend on settings get the config file path from environment variables
(using bash):
export MY_APP_CONFIG_PATH=/path/to/config/file/on/this/system
and then my scripts pick up the settings like so:
import os
path_to_config = os.environ['MY_APP_CONFIG_PATH']
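Putting the two pieces together with the built-in ConfigParser mentioned above (the section and option names here are made up for illustration):

```python
import os
from configparser import ConfigParser  # the "ConfigParser" module on Python 2

def load_settings(env_var="MY_APP_CONFIG_PATH"):
    """Read the per-machine config file whose path is exported in the
    environment, e.g.  export MY_APP_CONFIG_PATH=/path/to/config/file"""
    path_to_config = os.environ[env_var]  # fail loudly if not provisioned
    config = ConfigParser()
    config.read(path_to_config)
    return config

# Usage (section/option names are illustrative):
#   cfg = load_settings()
#   data_dir = cfg.get("paths", "data_dir")
```

Each server keeps its own INI file at whatever path it exports, and the code itself stays identical everywhere.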