Unable to get environment variable - python

In my __init__.py file I am trying to set:
import os
app.config['SECRET_KEY'] = os.environ['MY_KEY']
but am getting the error:
raise KeyError(key)
KeyError: 'MY_KEY'
When I run printenv, the variable MY_KEY is present.
Also in IDLE I tried running:
import os
print os.environ['MY_KEY']
and I get the correct output.
I set MY_KEY in /etc/profile using:
export MY_KEY="1234example_secret_key"
I did restart my computer after making the change to the profile file.
Would anyone know what the issue may be?
Thanks for your help.

If you are running your process under supervisor and/or gunicorn (judging by your comments, you are), you can use supervisor's environment config param.
[program:my_app]
...
environment = MY_KEY="ABCD",MY_KEY2="EFG"
You can also use gunicorn --env flag or env config file param.
gunicorn -b 127.0.0.1:8000 --env MY_KEY=ABCD test:app
The downside of the second option is that your key will be visible to anyone who has access to your machine (for example, in the process list).
The best approach is to use the app.config.from_envvar function and store your config in a machine-specific settings file (which could live on an encrypted filesystem). In that case your code would look like this:
app = Flask(__name__)
...
app.config.from_envvar('MACHINE_SPECIFIC_SETTINGS')
Your MACHINE_SPECIFIC_SETTINGS env variable could point to the file that will have the MY_KEY value.
MACHINE_SPECIFIC_SETTINGS=/path/to/config.py
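As a sketch of how this fits together: Flask's from_envvar essentially reads a file path from the environment, execs the file, and keeps the UPPERCASE names as config keys. The toy loader below is illustrative of the mechanism, not Flask's actual implementation:

```python
import os
import tempfile

def config_from_envvar(config, variable_name):
    """Toy version of Flask's app.config.from_envvar: read a file path
    from the named environment variable, exec the file, and keep the
    UPPERCASE names as config keys."""
    path = os.environ[variable_name]
    namespace = {}
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), namespace)
    config.update({k: v for k, v in namespace.items() if k.isupper()})
    return config

# demo: a machine-specific settings file (path is illustrative)
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('MY_KEY = "1234example_secret_key"\n')
os.environ["MACHINE_SPECIFIC_SETTINGS"] = f.name
cfg = config_from_envvar({}, "MACHINE_SPECIFIC_SETTINGS")
print(cfg["MY_KEY"])  # -> 1234example_secret_key
```

Because the settings file is plain Python, you can keep per-machine values (secret keys, database URIs) out of the repository while the application code stays identical everywhere.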

Heroku Python Local Environment Variables

Working on a Heroku Django project.
Goal: I want to run the server locally with the same code that runs on the cloud (makes sense).
Problem:
Environments differ between a Linux server (Heroku) and a local Windows PC.
Local environment variables differ from cloud Heroku config vars.
Heroku config vars can be set up easily using the CLI: heroku config:set TIMES=2.
Setting up local env vars, on the other hand, is a total mess.
I tried the following in cmd:
py -c "import os;os.environ['Times']=2" # To set an env var
Then ran py -c "import os;os.environ.get('Times','Not Found')" stdout: "Not Found".
After a bit of research it appears that such env vars are stored only for the current process/session.
Solution theory: redirect os.environ to a .env file at the root of the Heroku project instead of the PC env vars. I found the tool direnv, which is perfect for Unix-like OSs but is not available for Windows.
views.py code (runs perfectly on the cloud, misbehaves on the local machine):
import os
import requests
from django.shortcuts import render
from django.http import HttpResponse
from .models import Greeting
def index(request):
    # os.environ.get takes 2 parameters: the variable name and a default
    # value returned if the variable is not found
    times = int(os.environ.get('TIMES', 3))
    return HttpResponse('<p>' + 'Hello! ' * times + '</p>')

def db(request):
    greeting = Greeting()
    greeting.save()
    greetings = Greeting.objects.all()
    return render(request, "db.html", {"greetings": greetings})
Main Question: Is there a proper way to hide secrets locally on Windows and access them via os.environ['KEY']?
Another solution theory: I was wondering if a Python virtual environment has its own environment variables. If so, I could activate a venv locally without affecting the cloud, and os.environ['KEY'] would be redirected to the venv's variables. Again, it's just a theory.
You can use environment variables which you can get via os.environ['KEY'].
The same code will work on both local development and on Heroku.
On Heroku, define these variables as config vars (heroku config:set KEY=val); locally (on Windows, for example), define the same variables in a .env file and use python-dotenv to load them. The .env file is never committed with the source code.
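Loading the .env file is exactly what python-dotenv does for you; a minimal sketch of the idea (a toy loader, not the real library, which also handles quoting, comments, and interpolation):

```python
import os
import tempfile

def load_env(path):
    """Minimal .env loader: put KEY=VALUE lines into os.environ.
    Toy version of python-dotenv's load_dotenv."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # env values must be strings; already-set variables win,
                # just as real config vars would in production
                os.environ.setdefault(key.strip(), value.strip())

# demo with a throwaway .env file (path is illustrative)
os.environ.pop("TIMES", None)  # clean slate for the demo
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("TIMES=2\n")
load_env(f.name)
print(os.environ.get("TIMES", "Not Found"))  # -> 2
```

Because the loader only calls setdefault, the same code runs unchanged on Heroku, where TIMES is already present as a config var and the .env file simply doesn't exist.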

Flask virtual environment and environment variables

To get gunicorn under supervisord to use the virtual environment /home/ubuntu/venv/bin it is not necessary to seek a judicious place to put source /home/ubuntu/venv/bin/activate. It is sufficient to write:
[program:hello]
command=/home/ubuntu/venv/bin/gunicorn -b localhost:8000 hello:app
directory=/home/ubuntu/hello/
environment=PATH="/home/ubuntu/venv/bin:%(ENV_PATH)s"
in /etc/supervisor/hello.conf.
The next task is to bring in a whole slew of environment variables. One way is to laboriously augment the supervisord config file as follows.
[program:hello]
...
environment=PATH="/home/ubuntu/venv/bin:%(ENV_PATH)s",SECRET_KEY="%(ENV_SECRET_KEY)s",DATABASE_URI="%(ENV_DATABASE_URI)s",etc1,etc2,etc3
Is there a way to bring in the environment variables in one shot (after they're initialized in, say, ~/.profile)?
Here is a recipe:
Write the environment variables in a file /home/ubuntu/prog/.env.
export FLASK_APP=/home/ubuntu/prog/hello.py
export SECRET_KEY=ABCD
export DATABASE_PASSWORD=EFGH
Use dotenv's load_dotenv to load the environment variables.
from flask import Flask
from os.path import join, dirname
from os import environ
from dotenv import load_dotenv
app = Flask(__name__)
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)
@app.route('/')
def hello():
    SECRET_KEY = environ.get("SECRET_KEY")
    DATABASE_PASSWORD = environ.get("DATABASE_PASSWORD")
    return SECRET_KEY + DATABASE_PASSWORD
Write a file /etc/supervisor/hello.conf.
[program:hello]
command=/home/ubuntu/venv/bin/gunicorn -b localhost:8000 hello:app
directory=/home/ubuntu/prog
stdout_logfile=/home/ubuntu/prog/hello_out.log
stderr_logfile=/home/ubuntu/prog/hello_err.log
user=ubuntu
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
[supervisord]
logfile=/home/ubuntu/prog/hello_supervisord.log
pidfile=/tmp/supervisord.pid
Load the environment by sourcing the .env file.
source /home/ubuntu/prog/.env
The environment variables are now loaded,
$ export | grep SECRET
declare -x SECRET_KEY="ABCD"
and they will be passed to the sub-process without messing with supervisord's environment=.
Launch supervisord in the foreground to confirm all is well.
/usr/bin/supervisord -n -edebug -c /etc/supervisor/hello.conf
Confirm from another shell that all is well.
$ curl localhost:8000
ABCDEFGH
Kill supervisord. Since it's in the foreground, it's enough to CTRL-c it.
Launch supervisord as a daemon.
/usr/bin/supervisord -c /etc/supervisor/hello.conf
Keep an eye on the three log files prog/hello_out.log, prog/hello_err.log, and prog/hello_supervisord.log.
Perhaps the most important point is to avoid using supervisord's environment=. SO chatter suggests that it handles commas, quotation marks, tabs, even newlines. Empirically, this doesn't hold (at least for supervisord 3.3.5), and the documentation does not settle it one way or the other. Two lines in supervisord's source (942-943) seem to be where the parsing happens, if someone cares to investigate the insufficiency of the docs.
To bring in the environment variables in one shot, permanently and securely, add the following line to the .bashrc in your $HOME directory.
For this, open the .bashrc file in your home directory with your favorite editor:
nano .bashrc
Add the following line somewhere in your .bashrc file (then run source ~/.bashrc or open a new shell so it takes effect):
export SECRET_KEY="YOUR SECRET KEY."
Now, to use this SECRET_KEY in Flask, import the os module and use it like this:
from flask import Flask
import os
app = Flask(__name__)
@app.route('/')
def hello():
    SECRET_KEY = os.environ.get('SECRET_KEY')
    return SECRET_KEY
Hope this helps.

Why does gunicorn not see the correct environment variables?

On my production server, I've set environment variables both inside and outside my virtualenv (only because I don't understand the issue going on), including a variable HELLO_WORLD_PROD which I've set to '1'. In the Python interpreter, both inside and outside my venv, os.environ.get('HELLO_WORLD_PROD') == '1' returns True. In my settings folder, I have:
import os
if os.environ.get('HELLO_WORLD_PROD') == '1':
    from hello_world.settings.prod import *  # noqa
else:
    from hello_world.settings.dev import *  # noqa
Both prod.py and dev.py inherit from base.py, and in base DEBUG = False, and only in dev.py does DEBUG = True.
However, when I trigger an error through the browser, I'm seeing the debug page.
I'm using nginx and gunicorn. Why is my application importing the wrong settings file?
You can see my gunicorn conf here
Thanks in advance for your patience!
I was using sudo service gunicorn start to run gunicorn. The problem is that service strips all environment variables except TERM, PATH and LANG. To fix it, I added the environment variables to the exec line in my gunicorn.conf using the --env flag, like exec env/bin/gunicorn --env HELLO_WORLD_PROD=1 --env DB_PASSWORD=secret etc.
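The stripping is easy to reproduce. The sketch below uses subprocess to show the effect: a normal child process inherits the variable, while a child started with a scrubbed environment (as service does for init scripts) does not. This illustrates the symptom, not service's exact mechanism:

```python
import os
import subprocess
import sys

os.environ["HELLO_WORLD_PROD"] = "1"
code = "import os; print(os.environ.get('HELLO_WORLD_PROD', 'unset'))"

# a normal child process inherits the exported variable
inherited = subprocess.check_output([sys.executable, "-c", code]).decode().strip()

# passing env= with only PATH mimics the stripped environment that
# `service` hands to the init script
scrubbed = subprocess.check_output(
    [sys.executable, "-c", code],
    env={"PATH": os.environ.get("PATH", "")},
).decode().strip()

print(inherited, scrubbed)  # -> 1 unset
```

This is also why the settings import picked dev.py: gunicorn's process never saw HELLO_WORLD_PROD, so the equality check against '1' failed.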

Docker flask application environment variables

I'm starting a docker container the following way:
docker run -e IP_AD=192.168.99.100 -p 80:80 flask_app
I'm simply trying to pass an IP Address to the flask application so that something can be loaded from my application. This resource will change from environment to environment, so this is the reason I would like to pass it as an environment variable.
Later, I would like to use this variable but from the context of the running flask application. How can I load IP_AD from my flask application and use it as a python variable?
I've tried doing this:
import os
os.environ.get('IP_AD')
But it does not seem to be loading anything. What is the correct way to load IP_AD passed via docker run -e?
Create a file .env like this:
model_path=./model/data_fs.csv
then install the python-dotenv library:
pip3 install python-dotenv==0.17.1
Add this to your code to load all your .env variables:
import os
from dotenv import load_dotenv
load_dotenv()
Then you can access them:
model_path = os.getenv("model_path")
if model_path is None:
    # no .env file found, or the variable is missing
    raise Exception("no .env file found")
You can also access it directly:
import os
os.environ["IP_AD"]
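Since the resource changes per environment, it can help to fail fast at startup when the variable was not passed to docker run -e, rather than deep inside a request handler. A small sketch (the helper name is illustrative):

```python
import os

def get_required_env(name):
    """Return the value of a required environment variable, failing
    fast with a clear message if it was not passed to `docker run -e`
    (helper name is illustrative)."""
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError("required environment variable %r is not set" % name)

os.environ["IP_AD"] = "192.168.99.100"  # simulates docker run -e IP_AD=...
print(get_required_env("IP_AD"))  # -> 192.168.99.100
```

Calling this once at module import time surfaces a missing -e flag immediately in the container logs instead of as a None sneaking through the app.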

Setting NewRelic environment on Dotcloud (Python)

I have a Python application that is set up using the new New Relic configuration variables in the dotcloud.yml file, which works fine.
However I want to run a sandbox instance as a test/staging environment, so I want to be able to set the environment of the newrelic agent so that it uses the different configuration sections of the ini configuration. My dotcloud.yml is set up as follows:
www:
    type: python
    config:
        python_version: 'v2.7'
        enable_newrelic: True
    environment:
        NEW_RELIC_LICENSE_KEY: *****************************************
        NEW_RELIC_APP_NAME: Application Name
        NEW_RELIC_LOG: /var/log/supervisor/newrelic.log
        NEW_RELIC_LOG_LEVEL: info
        NEW_RELIC_CONFIG_FILE: /home/dotcloud/current/newrelic.ini
I have custom environment variables so that the sandbox is set to "test" and the live application is set to "production".
I am then calling the following in my uwsgi.py:
NEWRELIC_CONFIG = os.environ.get('NEW_RELIC_CONFIG_FILE')
ENVIRONMENT = os.environ.get('MY_ENVIRONMENT', 'test')
newrelic.agent.initialize(NEWRELIC_CONFIG, ENVIRONMENT)
However the dotcloud instance is already enabling newrelic because I get this in the uwsgi.log file:
Sun Nov 18 18:50:12 2012 - unable to load app 0 (mountpoint='') (callable not found or import error)
Traceback (most recent call last):
File "/home/dotcloud/current/wsgi.py", line 15, in <module>
newrelic.agent.initialize(NEWRELIC_CONFIG, ENVIRONMENT)
File "/opt/ve/2.7/local/lib/python2.7/site-packages/newrelic-1.8.0.13/newrelic/config.py", line 1414, in initialize
log_file, log_level)
File "/opt/ve/2.7/local/lib/python2.7/site-packages/newrelic-1.8.0.13/newrelic/config.py", line 340, in _load_configuration
'environment "%s".' % (_config_file, _environment))
newrelic.api.exceptions.ConfigurationError: Configuration has already been done against differing configuration file or environment. Prior configuration file used was "/home/dotcloud/current/newrelic.ini" and environment "None".
So it would seem that the newrelic agent is being initialised before uwsgi.py is called.
So my question is:
Is there a way to initialise the newrelic environment?
The easiest way to do this, without changing any code would be to do the following.
Create a new sandbox app on dotCloud (see http://docs.dotcloud.com/0.9/guides/flavors/ for more information about creating apps in sandbox mode)
$ dotcloud create -f sandbox <app_name>
Deploy your code to the new sandbox app.
$ dotcloud push
Now you should have the same code running in both your live and sandbox apps. But because you want to change some of the ENV variables for the sandbox app, you need to do one more step.
According to this page http://docs.dotcloud.com/0.9/guides/environment/#adding-environment-variables there are 2 different ways of adding ENV variables.
Using the dotcloud.yml's environment section.
Using the dotcloud env cli command
Whereas dotcloud.yml allows you to define different environment variables for each service, dotcloud env sets environment variables for the whole application. Moreover, environment variables set with dotcloud env supersede environment variables defined in dotcloud.yml.
That means that if we want to have different values for our sandbox app, we just need to run a dotcloud env command to set those variables on the sandbox app, which will override the ones in your dotcloud.yml
If we just want to change one variable, we would run this command.
$ dotcloud env set NEW_RELIC_APP_NAME='Test Application Name'
If we want to update more than one at a time, we would do the following.
$ dotcloud env set \
'NEW_RELIC_APP_NAME="Test Application Name"' \
'NEW_RELIC_LOG_LEVEL=debug'
To make sure that you have your env variables set correctly, you can run the following command.
$ dotcloud env list
Notes
The commands above use the new dotCloud 0.9.x CLI; if you are using the older one, you will need to either upgrade or refer to the documentation for the old CLI: http://docs.dotcloud.com/0.4/guides/environment/
When you set your environment variables, the platform restarts your application so that it can install them; to limit your downtime, set all of them in one command.
Unless they are doing something odd, you should be able to override the app_name supplied by the agent configuration file by doing:
import newrelic.agent
newrelic.agent.global_settings().app_name = 'Test Application Name'
Don't call newrelic.agent.initialize() a second time.
This will only work if app_name lists a single application to report data to.
