Deploying Python on Elastic Beanstalk with different configurations for different environments

AWS does not properly explain how to manage multiple deployment environments on Elastic Beanstalk, or how to keep the per-environment settings in your source control repo.
They clearly explain how to set up your python.config in .ebextensions like so:
"aws:elasticbeanstalk:container:python:environment":
DJANGO_SETTINGS_MODULE: "settings"
SERVER_ROOT: "/opt/python/current/app/"
However, if you want multiple environments like staging and production, you currently have to swap out your configuration files. What's worse, how do you retain this in your source control tree for shared environments like staging? It appears that every push needs these environment-specific configuration settings.
I've also found that AWS doesn't let me deploy unstaged changes, which means writing a script to handle my deployments isn't an option either. What am I missing here?

Haven't tried it, but it appears that you can pass DJANGO_SETTINGS_MODULE not through the configuration file but through the container's own parameters. You can update it through the Environment Details -> Edit Configuration -> Container section of the Beanstalk console.
Just as an idea:
Create multiple environments: "production", "staging", etc.
Configure each with the relevant DJANGO_SETTINGS_MODULE value
Remove the DJANGO_SETTINGS_MODULE value from .ebextensions
Deploy the application to a pre-created environment (a sketch of the application side follows below)
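For this to work, the application has to read DJANGO_SETTINGS_MODULE from the environment rather than hard-coding it. A minimal sketch of the app side in wsgi.py, assuming an illustrative myapp.settings fallback:

import os

from django.core.wsgi import get_wsgi_application

# Beanstalk injects DJANGO_SETTINGS_MODULE into the container environment,
# so each environment (production, staging, ...) can point to its own module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
application = get_wsgi_application()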

I did some digging on this in the past, and it seems that they want you to use eb branch to configure the different environments, and then configure the option differently within the optionsettings file locally at the eb client level (when you init the branch).
When you think about it, environment configuration (i.e. DJANGO_SETTINGS_MODULE) should be managed separately from the application code, so I keep it out of .ebextensions and set it up when I move to a new environment. When I switch to an existing one, I make sure that value is set properly for the environment I want to work in.

Related

Heroku config variables appear in console but are set to None when accessing with os.environ.get

I'm trying to upload files to an S3 bucket and I'm using this tutorial. I'm setting config variables in the terminal using heroku config:set and when I enter heroku config in the terminal, my variables appear.
However, if I access S3_BUCKET in code when running locally, the value returned is None:
import os

S3_BUCKET = os.environ.get('S3_BUCKET')
print("value of S3_BUCKET:", S3_BUCKET)
This prints None.
Does anyone know why this doesn't work? The frontend of my application is in React, if that matters, but all of the bucket upload code is done in Python.
heroku config:set sets variables on Heroku. As their name suggests, environment variables are specific to a particular environment. Variables set this way have no effect on your local machine.
There are several ways to set environment variables locally. One common method involves putting them into an untracked .env file. Many tools, including heroku local, read that file and add variables found there to the environment when running locally.
Heroku recommends this, though you will have to make sure whatever tooling you use picks up those variables. Aside from heroku local, Visual Studio Code understands .env files for Python projects. So does Pipenv. There are also standalone tools for this like direnv.
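For example, with python-dotenv (one assumed choice among the tools above) and an untracked .env file containing S3_BUCKET=my-bucket, the original snippet starts working locally:

# a minimal sketch, assuming python-dotenv is installed (pip install python-dotenv)
import os

from dotenv import load_dotenv

load_dotenv()  # merges key=value pairs from .env into os.environ

S3_BUCKET = os.environ.get('S3_BUCKET')
print("value of S3_BUCKET:", S3_BUCKET)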

Pass environment variables to GAE instance

I'm using GAE to deploy my app and I have some environment variables that I would like to pass to my GAE instance. For example, every time I use the DB, the unix socket assignment is currently like this:
unix_socket = ('<my_path_to_my_sockets>/{instance}'
               .format(instance=properties.get_property('data.instance')))
That works, but the problem is that this is shared code: every time someone runs a local test they change the path and push the change to the repository. Whoever pulls those changes then has to edit the path again before making DB requests, because everyone has a different local path to the sockets. So I've created the following statement:
unix_socket = ((os.environ['CLOUDSQL_INSTANCES_DIR']
                if 'CLOUDSQL_INSTANCES_DIR' in os.environ
                else '/home/cpinam/cloudsql/')
               + properties.get_property('cloud.instance'))
So if someone has the environment variable set on their system, it is used instead of the absolute path. The problem is that this environment variable does not refer to my computer but to the GAE instance.
The question is: how can I read my own environment variable instead of the server instance's? Is it possible?
PS: I know that I can pass environment variables through the app.yaml file, but that implies modifying the file.
Thank you
App Engine does not support what you want in the way that you want it.
There are a few alternative approaches you might want to consider. It sounds like your primary constraint is enabling individual developers to store configuration that differs from production. One option is to let developers store that configuration in their local Datastore instances.
Your code would look something like:
import os

# SERVER_SOFTWARE is set by the App Engine runtime
if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/'):
    unix_socket = os.environ['CLOUDSQL_INSTANCES_DIR']
    ...
else:
    # locally, read the value from your own per-developer store
    # (your_configuration_type is a placeholder, e.g. the local Datastore)
    unix_socket = your_configuration_type.get("CLOUDSQL_INSTANCES_DIR")
    ...
Another alternative would be the approach outlined here, where you'd store the relevant env vars in your own version of client_secrets.json and make sure that file is listed in .gitignore.
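A minimal sketch of that approach, with an illustrative helper (the file name mirrors the client_secrets.json idea; nothing here is a library API):

import json
import os

def load_local_config(path='client_secrets.json'):
    """Merge per-developer values from a gitignored JSON file into os.environ."""
    if os.path.exists(path):
        with open(path) as f:
            for key, value in json.load(f).items():
                os.environ.setdefault(key, str(value))

Using setdefault means real environment variables still take precedence over the file.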
This can currently be done using the original App Engine SDK's appcfg.py command for deployment. As far as I know, it is not possible with gcloud-based App Engine deployment.
You can define a default environment variable in your app.yaml file:
env_variables:
  HOST_NAME: ''
Then pass your environment variable using the -E option of appcfg.py: -E NAME:VALUE. For example: -E HOST_NAME:WOAH
From the -E flag's description: set an environment variable, potentially overriding an env_variables value from the app.yaml file (the flag may be repeated to set multiple variables).
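Either way, the value shows up in application code as an ordinary environment variable:

import os

# the app.yaml default applies unless it was overridden with -E at deploy time
host_name = os.environ.get('HOST_NAME', '')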

Is there a better way to set a gcloud project in a directory?

I work on multiple App Engine projects in any given week, i.e. assume multiple clients. Earlier I could set application in app.yaml, so whenever I did appcfg.py update.... it would ensure deployment to the right project.
With gcloud app deploy, the application variable throws an error; I had to use gcloud app deploy --project [YOUR_PROJECT_ID]. So what used to be a directory-level setting for a project now goes into our build tooling, and missing that simple detail can push a project's code to the wrong customer.
I.e. if I did gcloud config set project proj1 and then somehow ran gcloud app deploy from proj2, it would deploy to proj1. Production deployments are done after detailed verification in the build tools, and we still use the --project flag, so it is less of an issue there.
But it's hard to do similar stuff in the development environment: dev_appserver.py doesn't have a --project flag.
Before starting dev_appserver.py I have to do gcloud config set project <project-id>. This is important when I'm using things like PubSub or GCS (with dev topics or dev buckets).
Unfortunately, missing a simple configuration like setting the project ID in a dev environment can result in uploading blobs/messages/etc. to the wrong dev GCS bucket or wrong dev PubSub topic (when not using emulators). And this has happened quite a few times, especially when starting new projects.
I find the above solutions hackish workarounds. Is there a good way to ensure that we do not deploy to or develop against the wrong project when working from a certain directory?
TL;DR - Not supported based on the current working directory, but there are workarounds.
Available workarounds
gcloud does not directly let you set up a configuration per working directory. Instead, you could use one of these 3 options to achieve something similar:
Specify --project, --region, --zone or the config of interest per command. This is painful but gets the job done.
Specify a different gcloud configuration directory per command (gcloud uses ~/.config/gcloud on *nix by default):
CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud COMMAND
CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud COMMAND
Create multiple configurations and switch between them as needed.
gcloud config configurations activate config-1 && gcloud COMMAND
Shell helpers
As all of the above options are ways to customize on the command line, aliases and/or functions in your favorite shell will also help make things easier.
For example in bash, option 2 can be implemented as follows:
function gcloud_proj1() {
  CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud "$@"
}
function gcloud_proj2() {
  CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud "$@"
}
gcloud_proj1 COMMAND
gcloud_proj2 COMMAND
There's a very nice way I've been using with PyCharm, I suspect you can do so with other IDEs.
You can declare the default env variables for the IDE Terminal, so when you open a new terminal gcloud recognises these env variables and sets the project and account.
No need to switch configurations between projects manually with gcloud config configurations activate. Terminals opened in other projects will inherit their own GCP project and config from the env variables (e.g. CLOUDSDK_CORE_PROJECT).
I've had this problem for years and I believe I found a decent compromise.
Create a simple script called contextual-gcloud. Note the \gcloud: the backslash bypasses the alias defined below, which is fundamental for the aliasing to work.
🐧$ cat > contextual-gcloud
#!/bin/bash
if [ -d .gcloudconfig/ ]; then
  echo "[$0] .gcloudconfig/ directory detected: using that dir for configs instead of default."
  CLOUDSDK_CONFIG=./.gcloudconfig/ \gcloud "$@"
else
  \gcloud "$@"
fi
Add the alias below to your .bashrc and reload / start a new bash. This also fixes autocompletion.
alias gcloud=contextual-gcloud
That's it! If a directory contains a .gcloudconfig/ directory, the script uses that configuration instead, which means you can keep your gcloud configuration in source control; just remember to gitignore things like logs and private material (keys, certificates, ..).
Note: auto-completion is fixed by the alias ;)
Code: https://github.com/palladius/sakura/blob/master/bin/contextual-gcloud
These are exactly the reasons why I highly dislike gcloud: making command-line arguments mandatory and dropping configuration-file support is much too error-prone for my taste.
So far I'm still able to use the GAE SDK instead of the Google Cloud SDK (see What is the relationship between Google's App Engine SDK and Cloud SDK?), which could be one option: basically keep doing things "the old way". Please note that it is no longer the recommended method.
You can find the still compatible GAE SDKs here.
For whenever the above is no longer an option and I'm forced to switch to the Cloud SDK, my plan is to keep version-controlled cheat-sheet text files in each app directory, containing the exact commands for running the devserver, deploying, etc. for that particular project, which I can copy-paste into the terminal without fear of making mistakes. You carefully set these up once and then just copy-paste them. As a bonus, you can keep different branch versions for different environments (staging/production, for example).
Actually I'm using this approach even with the GAE SDK, to prevent accidental deployment of app-level config files to the wrong GAE app (such deployments must use command-line arguments to specify the app in multi-service apps).
Or do the same but with environment config files and wrapper scripts instead of cheat-sheet files, if that's your preference.

Best way to handle different configuration/settings based on environment in Django project

Is DEBUG == False supposed to mean that the app is running in production environment?
At least, that's what I occasionally see on the internet. But what do I put in settings.py then? Okay, I can put local settings into, say, settings_local.py and import it from settings.py. But if some settings depend on the environment, then I have to put them after the import statement. The more I think about it, the less I like it. What about you?
As an answer to the question:
Is DEBUG == False supposed to mean that the app is running in production environment?
DEBUG is a setting that you define in your settings.py file.
If set to True, in the case of an unhandled exception it displays the complete stack trace along with the values of all declared variables.
If set to False, your server just returns a 500 status code without any stack trace.
In production, you must set DEBUG to False to prevent a potential security breach: the stack trace exposes information you wouldn't want your users to know.
To use a different settings configuration in each environment, create a separate settings file per environment. In your deployment script, start the server with the --settings=<my_settings_module> parameter, so each environment uses its own settings (a sketch follows the list below).
Benefits of using this approach:
Your settings will be modular, based on each environment
You may import master_settings.py containing the base configuration in environment_configuration.py and override the values you want to change in that environment
If you have a huge team, each developer may have their own local_settings.py without any risk of modifying the server configuration. Add these local settings to .gitignore if you use git, or .hgignore if you use Mercurial, for version control. That way local settings won't even be part of the actual code base, keeping it clean.
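A minimal sketch of such an environment file, assuming the master_settings.py layout mentioned above (file and setting names are illustrative):

# production_settings.py -- inherits the shared base, overrides per-env values
from master_settings import *  # base configuration shared by all environments

DEBUG = False
ALLOWED_HOSTS = ['www.example.com']  # illustrative per-environment override

You would then start the server with --settings=production_settings as described above.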
Use django-configurations to define some alternative configurations, and set the environment variable DJANGO_CONFIGURATION on each machine running the code to choose one.
You're free to define the classes however you'd like, but I'd recommend defining a Common class, which everything inherits from, and then Dev, Test, and Prod classes.
Anything involving system configuration should be pulled from environment variables (eg database connection, cache connection etc).
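A minimal sketch of that layout, assuming the django-configurations package is installed (pip install django-configurations):

# settings.py -- class-based settings, selected via DJANGO_CONFIGURATION
from configurations import Configuration

class Common(Configuration):
    INSTALLED_APPS = [
        'django.contrib.contenttypes',
        'django.contrib.auth',
    ]

class Dev(Common):
    DEBUG = True

class Prod(Common):
    DEBUG = False

Note that django-configurations requires hooking its importer into manage.py and wsgi.py (for example, importing execute_from_command_line from configurations.management instead of django.core.management); each machine then sets DJANGO_CONFIGURATION=Dev or Prod.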
Have fun!
To add to the answer above, here's the script (manage.sh) I came up with:
#!/usr/bin/env bash
DIR=$(dirname -- "$(readlink -f -- "$0")")
export PYTHONPATH="$DIR${PYTHONPATH:+:$PYTHONPATH}"
if [ -e <app>/local_settings.py ]; then
export DJANGO_SETTINGS_MODULE=<app>.local_settings
else
export DJANGO_SETTINGS_MODULE=<app>.settings
fi
django-admin "$#"
It uses <app>.local_settings if it exists; otherwise it falls back to <app>.settings.
Alternatively, you can just edit your manage.py.
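A minimal sketch of that manage.py variant, using importlib to probe for the module ('myapp' stands in for the <app> placeholder above):

#!/usr/bin/env python
import importlib.util
import os
import sys

if __name__ == '__main__':
    # prefer myapp.local_settings when present, mirroring manage.sh above
    if importlib.util.find_spec('myapp.local_settings') is not None:
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.local_settings')
    else:
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)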

Configure Sentry for different environments (staging, production)

I want to configure Sentry in a Django app to report errors using different environments, like staging and production. This way I can configure alerting per environment.
How can I configure different environments for Raven using different Django settings? The environment argument is not listed in the Raven Python client arguments docs; however, I can find it in the raven-python code.
If you are setting environment as a constant within Django settings, you can set the environment argument when initializing the raven-python client.
You're correct: our docs didn't include the environment argument. I've updated them to include it. Thanks for raising the issue.
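For the Django integration, that looks something like this (the DSN is a placeholder):

# settings.py -- a minimal sketch; pick the value per settings file
RAVEN_CONFIG = {
    'dsn': 'https://<key>@sentry.example.com/<project>',
    'environment': 'staging',  # e.g. 'production' in the prod settings file
}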
You can use different settings for different branches: a main settings file with all shared settings, plus dev.py settings for the develop branch and prod.py for production. While deploying your app you just specify which settings are meant to be used. Alternatively, you can use the GitPython package and do something like this:
import git  # GitPython; used to determine the currently checked-out branch

branch = git.Repo(search_parent_directories=True).active_branch.name

if branch in ['develop']:
    DEBUG = True
    RAVEN_CONFIG = {
        'dsn': 'your_link_to_raven',
    }
else:
    # some other settings
    pass
