How to set up environment-specific params for each environment in Python - python

I am new to deploying Python code that connects to databases up through our environments. How and where would I specify the database server and database info (or any other environment-specific info) for my application when it runs in our STAGE environment vs. our PROD environment?
What is the best practice for doing this in a python application (the application isn't a web app or API app)?
The code is stored in an ADO (Azure DevOps) repo and has a build and release pipeline that pushes the code out to each VM.

If I understand your problem correctly, you want to have several possible configurations depending on the deployment environment without having to rework the code. If this is the case:
I recommend you look at the configparser module in the standard library. It lets you read a configuration file with different parameters.
You can have one configuration file per environment and put your parameters in it.
For example:
In your file database.cfg
[DATABASE]
DATABASE_TYPE=mysql
USERNAME=user
PASSWORD=Password
IP_ADDRESS=127.0.0.1
PORT=3306
DATABASE_NAME=python
And in your Python program:
import configparser

# Load the environment-specific settings file
config = configparser.ConfigParser()
config.read("database.cfg")

username = config["DATABASE"]["USERNAME"]
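If you keep one such file per environment, here is a small sketch of how the right one could be picked at startup (the APP_ENV variable and the database.stage.cfg / database.prod.cfg names are assumptions, not something configparser mandates):
import configparser
import os

# Assumed convention: the release pipeline (or the VM) sets APP_ENV
# to "stage" or "prod"; default to "stage" when nothing is set.
env = os.environ.get("APP_ENV", "stage")

config = configparser.ConfigParser()
config.read(f"database.{env}.cfg")  # e.g. database.stage.cfg or database.prod.cfg

db_host = config["DATABASE"]["IP_ADDRESS"]
db_name = config["DATABASE"]["DATABASE_NAME"]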

Related

How to hide the database connection URL when moving a Python Flask web project to GitHub?

I am developing a web app using the Python Flask framework. After I develop, I push the code to GitHub, and a Heroku web server takes the code from GitHub automatically. I want to hide the database connection URL and the app.secret_key of my app on GitHub. How can I handle this situation?
I need a solution that helps me hide the secret info for the app. I also need to get that info onto the Heroku web server via GitHub.
The Heroku team has actually written a guide on best practices for building applications that are deployed in the cloud, called the 12 Factor App. It has a section on configuration that is a great fit for what you're looking for.
The main concept is that configuration that is either secret, or that changes on a per-environment basis (e.g. local vs. production), should be stored as environment variables and referred to as environment variables within your code base.
For example:
DB_HOST = "db.mydomain.com" # Bad practice
DB_HOST = os.environ.get("DB_HOST") # Good practice
If you're working with tools such as Docker and Docker Compose, you can have a .env file loaded automatically to populate the environment variables. This file should be kept outside of your repository and ignored via your .gitignore file.
If you're not using Docker, you can also install a Python package such as python-dotenv to load the environment variables from the .env file as you work locally, as sketched below.
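For instance, a minimal python-dotenv sketch (the variable names and .env contents are placeholders):
# .env (kept out of Git via .gitignore)
#   DB_HOST=db.mydomain.com
#   SECRET_KEY=change-me
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env into os.environ

db_host = os.environ.get("DB_HOST")
secret_key = os.environ.get("SECRET_KEY")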
This can be achieved using environment variables, i.e. you set the Heroku env variables using the Heroku CLI and access them from your Python code. In your case that means running this on the Heroku CLI:
heroku config:set DB_URI=your_db_uri_here
and accessing it in Python using:
import os
db_uri = os.environ.get('DB_URI', None)
Hope it helps
The Heroku config commands help manage your app's config vars, like database URLs, secret keys, etc. You can read more about them here. Once you set them up in Heroku, you don't need to store them in your code. If you would rather not set these values using the Heroku CLI, you can use the Heroku Dashboard as well.
Once you have set up the config vars as described above, you can access them within your code as environment variables. The following is an example for Python that uses the boto library and establishes an S3 connection, grabbing S3_KEY and S3_SECRET from the config vars. More examples are available here.
import os
from boto.s3.connection import S3Connection

s3 = S3Connection(os.environ['S3_KEY'], os.environ['S3_SECRET'])
Now, you can safely push your code to Github.

Using different DBs in production and test environments

I want to use a test DB in my test environment, and the production DB in the production environment of my Python application.
How should I handle routing to the two DBs? Should I have an untracked config.yml file that holds the test DB's connection string on my test server, and the production DB's connection string on the production server?
I'm using GitHub for version control and Travis CI for deployment.
Let's take a Linux environment as an example. Often, the user-level configuration of an application is placed under your home folder as a dot file. So you can do something like this:
In your git repository, track a sample configuration file, e.g. config.sample.yaml, and describe the configuration structure there.
When deploying, whether to the test environment or the production environment, copy and rename this file as a dot file, e.g. $HOME/.{app}.config.yaml. Then in your application, read this file, as sketched below.
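A minimal sketch of that read step, assuming PyYAML and a hypothetical app name myapp:
import os
import yaml  # pip install pyyaml

# Dot-file location following the convention above (the name is an assumption)
config_path = os.path.join(os.path.expanduser("~"), ".myapp.config.yaml")

with open(config_path) as f:
    config = yaml.safe_load(f)

db_url = config["database"]["url"]  # assumed structure from config.sample.yaml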
If you are developing a Python package, you can have setup.py perform the file copy. There are some advantages:
You can always track structural changes to your configuration file.
Configuration is kept separate between the test and production environments.
It is more secure: you do not need to commit your DB connection information to the public repository.
Hope this would be helpful.

How to easily deploy sensitive config data to Amazon Elastic Beanstalk for Flask web server

I have a Flask application that I want to deploy to Amazon Elastic Beanstalk, but have the following problem:
Our configuration contains a lot of secret API keys, so we don't have it checked into Git.
Elastic Beanstalk only deploys the files that are committed to Git, so our config doesn't get uploaded as part of the deployment process.
Elastic Beanstalk offers a way to set and read environment variables - but Flask typically expects its configuration to be set in Python files that are imported at runtime.
Is there a way we can get our configuration file automatically uploaded to AWS EB alongside the source code when we deploy it? I considered having it in a file that gets created via configuration in .ebextensions/... but that would just need to be in Git anyway, defeating the object.
Or do we have to (a) convert all our code to read configuration from environment variables, and (b) create some sort of pre-startup script that injects the values in the current config scripts into the environment? It wouldn't be ideal to have to maintain 2 totally different ways of reading configuration data into our app depending on whether it's running on AWS or not.
I have seen this question/answer - How to specify sensitive environment variables at deploy time with Elastic Beanstalk - and I don't think that adequately addresses the issue because it is not very scalable to large numbers of config options nor deals with the fact that Flask typically expects its config in a different format.
If you are building a 12-factor application, the simplest way to handle this kind of configuration is to look up environment variables by default and fall back on your hard-coded configuration:
from os import environ

from flask import Flask
from flask.config import Config

class EnvConfiguration(Config):
    """Configuration sub-class that checks its environment first.
    If no environment variable is available,
    it falls back to the hard-coded value provided."""
    def __getitem__(self, key):
        try:
            return environ[key]
        except KeyError:
            # Fall back to the value stored in the config itself
            return super(EnvConfiguration, self).__getitem__(key)
You can then override either Flask.config_class (v1+) or Flask.make_config to ensure that the configuration used is the one you want:
class CustomApp(Flask):
    # 1.0+ way
    config_class = EnvConfiguration

    # 0.8+ way (you only need one of these)
    def make_config(self, instance_relative=False):
        config = super(CustomApp, self).make_config(instance_relative=instance_relative)
        # Re-wrap the default config in the env-aware subclass
        config = EnvConfiguration(config.root_path, config)
        return config
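A quick usage sketch of the above (the DB_HOST key is just an illustration):
app = CustomApp(__name__)
app.config["DB_HOST"] = "localhost"  # hard-coded fallback

# If a DB_HOST environment variable is set it takes precedence;
# otherwise the fallback above is returned.
print(app.config["DB_HOST"])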

How do I configure a Flask-based python app using nginx and uwsgi?

I have a Flask-based python app that needs a bunch of configuration information (e.g. database connection parameters) on app start.
In my nginx configuration, I can provide parameters using uwsgi_param, as shown in this SO question.
However my problem is that the request.environ object is not available outside of a request handler, so I can't use it at application start. Furthermore, I'm trying to provide a mostly deployment-agnostic configuration mechanism so that I can test using Flask's embedded server during development and use parameters passed by uWSGI in production.
What's the best way to get configuration into my application so that it's accessible via os.environ or something similar?
Look at http://flask.pocoo.org/docs/config/#development-production. You can always have development, test, and production configs, and you can also pull some settings from OS environment variables or from a specific file.
For example, I have a config module and import some secret settings (which I don't want to store with the source code) from another module. Then on deploy I just replace the file with the secret settings. It is probably better to use OS environment variables for this.
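A minimal sketch of the per-environment config-class pattern those docs describe (the class names and the DB_HOST setting are assumptions):
import os

from flask import Flask

class BaseConfig:
    DEBUG = False
    DB_HOST = "localhost"  # safe default for local development

class DevelopmentConfig(BaseConfig):
    DEBUG = True

class ProductionConfig(BaseConfig):
    # Pull the real value from the environment (set by uWSGI, systemd, etc.)
    DB_HOST = os.environ.get("DB_HOST", BaseConfig.DB_HOST)

app = Flask(__name__)
app.config.from_object(ProductionConfig)  # or DevelopmentConfig locally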

How do I get live data from my production App Engine app to my local dev app?

I'd like to know if anyone has pointers about how to configure App Engine's remote_api so that I can debug my code locally but use the remote_api to fetch some data from my server. That way, I can test against real information.
Thanks!
If you want to debug your own script using data from the High Replication Datastore, then read Using the Remote API in a Local Client. First you need to enable remote_api in app.yaml and upload the application. Then you add this part to your script:
from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # Credentials used to authenticate against the deployed app
    return ('your_username', 'your_password')

remote_api_stub.ConfigureRemoteApi(None, '/_ah/remote_api', auth_func, 'your-app-id.appspot.com')
Now you access data from the High Replication Datastore instead of the local mockup.
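After that call, ordinary datastore operations hit the live datastore. A quick sketch (the Greeting model is a hypothetical kind, not part of remote_api):
from google.appengine.ext import db

class Greeting(db.Model):  # assumed kind that exists in your production datastore
    content = db.StringProperty()

# With the remote API configured, this query runs against live data
print(Greeting.all().count())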
Also, if you want to quickly add test data to the HRD through a console, I recommend using PyCharm, which can run scripts with custom parameters. From the PyCharm menu select Run -> Edit Configurations. Create a new configuration and set the following parameters:
Name: name of the script
Script: point to your $GAE_SDK_ROOT\remote_api_shell.py
Script parameters: -s your_app_id.appspot.com
Working directory: I recommend setting this. You probably want to test entities, and to successfully import class definitions it is best to be in the root directory of your application, so set it to the ROOT of your application.
Now when you run or debug the specified configuration, PyCharm will open a Python console, prompting you to connect to GAE with your username and password. You can then use it to manipulate data on Google's servers.
For more information on remote_api read:
Accessing the datastore remotely with remote_api
Remote API for Python
For more information on PyCharm custom configurations, read:
Creating and Editing Run/Debug Configurations
You could download data as described here, and use it to populate your local dev app. There's no reason that PyCharm needs to be involved.
