Docker flask application environment variables - python

I'm starting a docker container the following way:
docker run -e IP_AD=192.168.99.100 -p 80:80 flask_app
I'm simply trying to pass an IP address to the Flask application so that a resource can be loaded by my application. This resource will change from environment to environment, which is why I would like to pass it as an environment variable.
Later, I would like to use this variable from the context of the running Flask application. How can I load IP_AD from my Flask application and use it as a Python variable?
I've tried doing this:
import os
os.environ.get('IP_AD')
But it does not seem to be loading anything. What is the correct way to load IP_AD passed with docker run -e?

Create a .env file like this:
model_path=./model/data_fs.csv
Then install the python-dotenv library:
pip3 install python-dotenv==0.17.1
Add this to your code to load all your .env variables:
import os
from dotenv import load_dotenv

load_dotenv()
Then you can access the value:
def get_model_path():
    if os.getenv("model_path", None) is not None:
        return os.getenv("model_path", None)
    # otherwise raise an exception
    raise Exception("no .env file found")
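As a side note (not part of the original answer), since the question starts the container with docker run -e, the same .env file can also be handed straight to Docker with the --env-file flag, which passes each KEY=VALUE line into the container's environment:
docker run --env-file .env -p 80:80 flask_app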

You can try it like this:
import os
os.environ["IP_AD"]
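Putting the pieces together for the original question, here is a minimal sketch of a Flask app that reads IP_AD at request time (the route and the fallback value are assumptions for illustration):
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # read the value passed with "docker run -e IP_AD=..."; fall back to a default if unset
    ip_ad = os.environ.get("IP_AD", "127.0.0.1")
    return f"Configured IP address: {ip_ad}"

if __name__ == "__main__":
    # listen on all interfaces so the published port 80 reaches the app
    app.run(host="0.0.0.0", port=80)
If os.environ.get('IP_AD') returns None, the variable usually was not actually passed to the container (for example, if -e came after the image name); you can double-check with docker exec <container> env.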

Related

Flask environment not changing to development

Whenever I run my Flask app in VSCode I am unable to change the environment to development. I have searched relevant links on this topic and tried all the solutions and combinations of solutions; nothing so far has worked. I would like my app to refresh whenever I make a change in any of the template files or my app.py file. Here is a list of what I've tried:
Typing in "set FLASK_ENV=development" on my terminal from this stackoverflow solution I am on Windows, so I instead used set
Pip installing python-dotenv and creating a .env or .flaskenv. file inside my root directory (yes I tried both) then adding FLASK_ENV=development as well as set FLASK_ENV=development inside both files (at different times trying all combinations) like this stackoverflow solution mentioned
Editing my main flask app file with the lines mentioned in the solution from list item number two...
if __name__ == '__main__':
    app.run(debug=True)
Yes, I have read the documentation for python-dotenv here and added load_dotenv() after my import statements.
Here is the start of my app.py code:
"""This is my controller"""
import re
from flask import Flask, render_template, request, redirect, session
from cs50 import SQL
from dotenv import load_dotenv
load_dotenv()
from flask_sqlalchemy import SQLAlchemy
# turn the current file into a web application that will listen for browser requests
# __name__ refers to the current file
app = Flask(__name__)
if __name__ == '__main__':
    app.run(debug=True)
And as you can see, Environment: production still shows in the terminal window.
With PowerShell the syntax for environment variables is different. To set the FLASK_ENV variable, type:
$env:FLASK_ENV = "development"
You can verify it with the gci env: command, which lists all of the environment variables. After that, flask run will start in development mode.
I discovered that for some reason Flask uses the ENV variable instead of FLASK_ENV.
Try setting ENV=development in your .env file; it worked for me.
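For completeness, a minimal .flaskenv sketch (assuming python-dotenv is installed and the app is started with flask run, which is when Flask reads this file; the app.py name is taken from the question):
FLASK_APP=app.py
FLASK_ENV=development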

Heroku Python Local Environment Variables

Working on a Heroku Django project.
Goal: I want to run the server locally with the same code that runs in the cloud (makes sense).
Problem:
Environments differ between a Linux server (Heroku) and a local Windows PC.
Local environment variables differ from cloud Heroku config vars.
Heroku config vars can be set up easily using the CLI: heroku config:set TIMES=2.
Setting up local env vars, on the other hand, is a total mess.
I tried the following in cmd:
py -c "import os;os.environ['Times']=2" # To set an env var
Then I ran py -c "import os;os.environ.get('Times','Not Found')", which gave "Not Found".
After a bit of research, it appears such env vars are only stored temporarily, per process/session.
Solution theory: redirect os.environ to a .env file in the root of the Heroku project instead of the PC env vars. I found the tool direnv, which is perfect for Unix-like OSs but not available for Windows.
views.py code (runs perfectly in the cloud, broken on the local machine):
import os
import requests
from django.shortcuts import render
from django.http import HttpResponse
from .models import Greeting
def index(request):
    # get method takes 2 parameters (env_var_string, return value if var is not found)
    times = int(os.environ.get('TIMES', 3))
    return HttpResponse('<p>' + 'Hello! ' * times + '</p>')

def db(request):
    greeting = Greeting()
    greeting.save()
    greetings = Greeting.objects.all()
    return render(request, "db.html", {"greetings": greetings})
Main question: is there a proper way to hide secrets locally on Windows and access them via os.environ['KEY']?
Another solution theory: I was wondering whether a Python virtual environment has its own environment variables. If so, I could activate a venv locally without affecting the cloud, and os.environ['KEY'] would be redirected to the venv variables. Again, it's just a theory.
You can use environment variables which you can get via os.environ['KEY'].
The same code will work on both local development and on Heroku.
On Heroku, define these variables using config vars (heroku config:set KEY=val), while locally (on Windows, for example) you define the same variables in a .env file and use python-dotenv to load them. The .env file is never committed with the source code.
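A minimal sketch of what that loading step could look like at the top of the Django settings.py (the placement and comments are assumptions; python-dotenv is assumed to be installed locally):
# at the top of settings.py -- local development convenience
import os
from dotenv import load_dotenv

load_dotenv()  # reads KEY=value pairs from a .env file if one is found; a no-op on Heroku

# after this, views.py can keep using os.environ.get('TIMES', 3) unchanged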

How to test dockerized flask app using Pytest?

I've built a Flask app that is designed to run inside a Docker container. It accepts POST requests and returns an appropriate JSON response if the header key matches the key that I put in the docker-compose environment.
...
environment:
  - SECRET_KEY=fakekey123
...
The problem comes when testing: the app (or the Flask client fixture in pytest) of course can't find the docker-compose environment, because the app wasn't started from docker-compose but from pytest.
secret_key = os.environ.get("SECRET_KEY")
# ^^ the key loaded to OS env by docker-compose
post_key = headers.get("X-Secret-Key")
...
if post_key == secret_key:
    RETURN APPROPRIATE RESPONSE
    .....
What is the (best/recommended) approach to this problem?
I found some plugins (one, two, three) to do this, but I'm asking here whether there is a more "simple"/"common" approach, because I also want to automate this test using CI/CD tools.
You most likely need to run py.test from inside your container. If you are running locally, then there's going to be a conflict between what your host machine is seeing and what your container is seeing.
So option #1 would be to use docker exec:
$ docker exec -it $containerid py.test
Then option #2 would be to create a script or task in your setup.py so that you can run a simpler command like:
$ python setup.py test
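If docker-compose is already in use, another option (not from the original answer; the service name app is an assumption) is to run the tests through compose so the environment section of the compose file is applied:
$ docker-compose run --rm app py.test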
My current solution is to mock the function that reads the OS environment. The OS env is only loaded if the app is started using Docker, so to make testing easy I just mock that function.
def fake_secret_key(self):
    return "ffakefake11"

def test_app(self, client):
    app.secret_key = self.fake_secret_key
    # ^^ real func         ^^ fake func
Another alternative is to use pytest-env, as @bufh suggested in a comment.
Create a pytest.ini file, then put:
[pytest]
env =
    APP_KEY=ffakefake11
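Without any extra plugin, pytest's built-in monkeypatch fixture can set the variable per test; a minimal sketch (the route, fixture names and expected status are assumptions):
def test_app_with_key(client, monkeypatch):
    # inject the value that docker-compose would normally provide
    monkeypatch.setenv("SECRET_KEY", "ffakefake11")
    response = client.post("/", headers={"X-Secret-Key": "ffakefake11"})
    assert response.status_code == 200
Note this only helps if the app reads SECRET_KEY at request time; a value read once at import time would have to be patched on the module instead.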

Why does gunicorn not see the correct environment variables?

On my production server, I've set environment variables both inside and outside my virtualenv (only because I don't understand the issue going on), including a variable HELLO_WORLD_PROD which I've set to '1'. In the Python interpreter, both inside and outside my venv, os.environ.get('HELLO_WORLD_PROD') == '1' returns True. In my settings folder, I have:
import os

if os.environ.get('HELLO_WORLD_PROD') == '1':
    from hello_world.settings.prod import *  # noqa
else:
    from hello_world.settings.dev import *  # noqa
Both prod.py and dev.py inherit from base.py; in base.py DEBUG = False, and only in dev.py is DEBUG = True.
However, when I trigger an error through the browser, I'm seeing the debug page.
I'm using nginx and gunicorn. Why is my application importing the wrong settings file?
You can see my gunicorn conf here
Thanks in advance for your patience!
I was using sudo service gunicorn start to run gunicorn. The problem is that service strips all environment variables except TERM, PATH and LANG. To fix it, in the exec line of my gunicorn.conf I added the environment variables using the --env flag, like exec env/bin/gunicorn --env HELLO_WORLD_PROD=1 --env DB_PASSWORD=secret etc.
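If the gunicorn config is a Python config file, another option (not from the original answer; the values shown are placeholders) is gunicorn's raw_env setting, which the worker processes inherit:
# gunicorn.conf.py
bind = "127.0.0.1:8000"
raw_env = [
    "HELLO_WORLD_PROD=1",
    "DB_PASSWORD=secret",
]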

Unable to get environment variable

In my __init__.py file I am trying to set:
import os
app.config['SECRET_KEY'] = os.environ['MY_KEY']
but am getting the error:
raise KeyError(key)
KeyError: 'MY_KEY'
When I run printenv, the variable MY_KEY is present.
Also in IDLE I tried running:
import os
print os.environ['MY_KEY']
and I get the correct output.
I set MY_KEY in /etc/profile using:
export MY_KEY="1234example_secret_key"
I did restart my computer after making the change to the profile file.
Would anyone know what the issue may be?
Thanks for your help.
If you are running your process under supervisor and/or gunicorn (judging by your comments, you are), you can use supervisor's environment config param.
[program:my_app]
...
environment = MY_KEY="ABCD",MY_KEY2="EFG"
You can also use gunicorn's --env flag or the env config file param.
gunicorn -b 127.0.0.1:8000 --env MY_KEY=ABCD test:app
The downside of the 2nd option is that your key will be visible to anyone who has access to your machine.
The best approach is to use the app.config.from_envvar function and store your config in a machine-specific settings location (this could be on an encrypted filesystem). In this case your code would look like this:
app = Flask(__name__)
...
app.config.from_envvar('MACHINE_SPECIFIC_SETTINGS')
Your MACHINE_SPECIFIC_SETTINGS env variable could point to the file that will have the MY_KEY value.
MACHINE_SPECIFIC_SETTINGS=/path/to/config.py
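A minimal sketch of what that machine-specific file could contain (the path and value are assumptions for illustration):
# /path/to/config.py -- machine-specific settings, kept out of version control
MY_KEY = "1234example_secret_key"
After app.config.from_envvar('MACHINE_SPECIFIC_SETTINGS') runs, the value is read from app.config['MY_KEY'] rather than os.environ['MY_KEY'].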
