How gunicorn and shared variables work - python

I have a class that I instantiate in a request (it's an ML model that loads and takes a bit of time to configure on startup). The idea is to do that only once and have each request use the model for predictions. Does gunicorn instantiate the app every time?
In other words, will the model be retrained every time a new request comes in?

It sounds like you could benefit from the application preloading:
http://docs.gunicorn.org/en/stable/settings.html#preload-app
This will let you load app code before spinning off your workers.
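For example, a minimal gunicorn config file could look like this (a sketch: the app:app module path, worker count, and bind address are assumptions about your project):
# gunicorn.conf.py -- a minimal sketch; "app:app", the worker count and the
# bind address are assumptions about your project layout.
# With preload_app, the master process imports the application (and loads the
# ML model) once, then forks workers that share that memory copy-on-write.
wsgi_app = "app:app"   # gunicorn >= 20.1; on older versions pass app:app on the CLI
preload_app = True
workers = 4
bind = "0.0.0.0:8000"
The same effect from the command line: gunicorn --preload -w 4 app:app.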

For those who are looking for how to share a variable between gunicorn workers without using Redis or sessions, here is a good alternative using the excellent python-dotenv:
The principle is to read and write shared variables from a file; this could be done with plain open(), but dotenv is a good fit in this situation.
pip install python-dotenv
In the app directory, create a .env file:
├── .env
└── app.py
.env:
var1="value1"
var2="value2"
app.py: # flask app
import os

from flask import Flask
from dotenv import load_dotenv

app = Flask(__name__)

# define the path explicitly if the .env file is not in the same folder
env_path = os.path.dirname(os.path.realpath(__file__)) + '/.env'

def getdotenv(env):
    try:
        # re-read the file on every call so writes from other workers are seen
        load_dotenv(dotenv_path=env_path, override=True)
        return os.getenv(env)
    except Exception:
        return None

def setdotenv(key, value):  # strings
    if key:
        if not value:
            value = "''"
        # use the dotenv CLI to write the shared variable back to the file
        cmd = 'dotenv -f ' + env_path + ' set -- ' + key + ' ' + value
        os.system(cmd)

@app.route('/get')
def index():
    return getdotenv('var1')  # retrieve the value of var1

@app.route('/set')
def update():
    setdotenv('var2', 'newValue2')  # set var2='newValue2'
    return 'OK'
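Because getdotenv() re-reads the .env file on every call, each gunicorn worker sees the latest value no matter which worker wrote it, at the cost of a file read per lookup.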

Related

How to load environment variables from file using python-dotenv before loading pytest scripts?

I have tests which run inside a docker container with https://github.com/pytest-docker-compose/pytest-docker-compose, but it takes too long for the container to start up/shut down. So I would like docker-compose to run the tests only on the CI machine, or when needed.
For this, I define the tests in simple_test_runner.py like this:
import os

THIS_FOLDER = os.path.dirname(os.path.realpath(__file__))
RUN_TESTS_LOCALLY = os.environ.get('RUN_TESTS_LOCALLY')

if RUN_TESTS_LOCALLY:
    def test_simple(run_process, load_env):
        output, returncode = run_process("python3 " + THIS_FOLDER + "/simple_test.py")
        assert returncode == 0
else:
    def test_simple(function_scoped_container_getter):
        container = function_scoped_container_getter.get("container_name")
        exec = container.create_exec("python3 /simple_test.py")
        ...
This works fine if I export RUN_TESTS_LOCALLY=1 before calling pytest -vs ., but if I try to use https://github.com/theskumar/python-dotenv in my conftest.py:
import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    print("Loading .env file")
    load_dotenv()
the environment variables are only loaded after pytest has already imported simple_test_runner.py. My tests therefore always run inside docker instead of outside it, even when RUN_TESTS_LOCALLY=1 is defined.
How can I make pytest call load_dotenv() before the if RUN_TESTS_LOCALLY: switch is evaluated in my test modules?
Or do you know an alternative to if RUN_TESTS_LOCALLY: that lets me use load_dotenv() and still switch between running my tests inside docker-compose or not?
Related:
https://github.com/quiqua/pytest-dotenv
How to load variables from .env file for pytests
pytest -- how do I use global / session-wide fixtures?
The code I originally posted works:
import sys
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    file = THIS_FOLDER + '/.env'
    if request.config.getoption("verbose") > 0:
        print("Loading", file, file=sys.stderr)
    load_dotenv(dotenv_path=file)
The problem was that when I had tested earlier, I had set my environment variable to RUN_TESTS_LOCALLY= (the empty string). This caused dotenv not to override my environment variable. Once I unset the variable with unset RUN_TESTS_LOCALLY in bash, dotenv finally loaded the environment file correctly.
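A minimal sketch of that override behaviour (the variable name is just for illustration):
import os
from dotenv import load_dotenv

# pretend the shell exported RUN_TESTS_LOCALLY= (the empty string)
os.environ["RUN_TESTS_LOCALLY"] = ""

load_dotenv()  # default override=False: an existing variable wins, even when empty
print(repr(os.environ["RUN_TESTS_LOCALLY"]))  # still '' -- the .env value is ignored

load_dotenv(override=True)  # now the value from .env replaces it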

Cannot configure a Starlette .env file

I'm trying to set up the .env file for my project, but it doesn't seem to work.
I store the .env file in the same folder as the config.py file, as below.
├── run.py
└── myproject
    ├── config.py
    └── .env
Code in my config.py file:
from starlette.config import Config
from starlette.datastructures import Secret, CommaSeparatedStrings
config = Config(".env")
BACKHUG_JWT_AES_KEY = config('BACKHUG_JWT_AES_KEY', default=None)
print(type(BACKHUG_JWT_AES_KEY))
print(BACKHUG_JWT_AES_KEY)
Data in the .env file:
BACKHUG_JWT_AES_KEY="SAMPLE_AES_KEY"
But the result I got was:
<class 'NoneType'>
None
I don't know why it got the None object. How can I fix it?
I run my project from a run.py file.
Code in the run.py file:
import uvicorn

if __name__ == "__main__":
    uvicorn.run("myproject.main:app", host="0.0.0.0", port=8888, reload=True)
There is a better way!
Use FastAPI and Pydantic for this. Pydantic provides a great BaseSettings class, and FastAPI has great documentation for settings and environment variables.
Create a Settings class by inheriting from Pydantic's BaseSettings:
from pydantic import BaseSettings

class Settings(BaseSettings):
    backhug_jwt_access_key: str

    class Config:
        env_file = "myproject/.env"
This Settings class automatically reads the variables from the .env file. Then from your main file you can use it like this:
from functools import lru_cache

from fastapi import Depends, FastAPI

from . import config

app = FastAPI()

@lru_cache()
def get_settings():
    return config.Settings()

@app.get("/info")
async def info(settings: config.Settings = Depends(get_settings)):
    return {"jwt_key": settings.backhug_jwt_access_key}
Remove the double quotes in the value and try again:
env file:
BACKHUG_JWT_AES_KEY=SAMPLE_AES_KEY
I'm pretty sure this is an issue with the current working directory. I can suggest two ways.
Move the .env file into the same directory as the run.py file,
or
change the path: config = Config("myproject/.env")
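A sketch that avoids the working-directory problem altogether, by resolving the .env file relative to config.py itself:
# config.py -- build an absolute path from this module's location, so it no
# longer matters which directory the server is started from
from pathlib import Path
from starlette.config import Config

env_path = Path(__file__).parent / ".env"
config = Config(env_path)

BACKHUG_JWT_AES_KEY = config('BACKHUG_JWT_AES_KEY', default=None)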

How to load variables from .env file for pytests

I am writing an API in Flask, and at some point I send an email to users who register. I store the variables for this email service in a .env file. Now I want to test the piece where I use these variables, but I have no idea how to load them from the .env file.
I tried basically all the answers here https://rb.gy/0nro1a, monkeypatching setenv as shown here https://rb.gy/kd07wa, plus other tips here and there. Each failed at some point. I also tried pytest-dotenv, pytest-env, pytest.ini, etc., but nothing really worked as expected, and it is all pretty confusing to me.
My pytest fixture looks like this:
@pytest.fixture(autouse=True)
def test_client_db():
    # set up
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    app.config["JWT_SECRET_KEY"] = "testing"
    with app.app_context():
        db.init_app(app)
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()
    # do testing
    yield testing_client
    # tear down
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()
I am wondering why I can't just load the .env file with a line like load_dotenv(path/to/.env) somewhere in the setup of the fixture and be done.
Can someone explain to me, as a newbie, how to read the .env variables in a simple, straightforward way that works with pytest?
The only way that actually works for me is to pass the environment variables on the command line as I run the tests:
FROM_EMAIL="some@email.com" MAILGUN_DOMAIN="sandbox6420919ab29b4228sdfda9d43ff37f7689072.mailgun.org" MAILGUN_API_KEY="245d6d0asldlasdkjfc380fba7fbskfsj1ad3125649esadbf2-7cd1ac2b-47fb3ac2" pytest tests
But this is a terrible way, and I don't want to write all these vars into the command line every time I run the tests.
I just want to write pytest tests; the .env file should be loaded somewhere automatically, I believe. But where and how?
Any help appreciated.
If you install python-dotenv, you can use that to load the variables from the .env file. Here is a minimal example:
.env
SQLALCHEMY_DATABASE_URI="sqlite:///"
JWT_SECRET_KEY="testing"
test.py
import os

import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    load_dotenv()

@pytest.fixture(autouse=True)
def test_client_db():
    print(f"\nSQLALCHEMY_DATABASE_URI"
          f"={os.environ.get('SQLALCHEMY_DATABASE_URI')}")
    print(f"JWT_SECRET_KEY={os.environ.get('JWT_SECRET_KEY')}")

def test():
    pass
python -m pytest -s test.py gives:
============================================ test session starts ============================================
...
collected 1 item
test.py
SQLALCHEMY_DATABASE_URI=sqlite:///
JWT_SECRET_KEY=testing
.
============================================= 1 passed in 0.27s =============================================
i.e. the environment variables are set throughout the test session and can be used to configure your app. Note that I didn't provide a path in load_dotenv(), because I put the .env file in the same directory as the test; in your code, you probably have to add the path (load_dotenv(dotenv_path=your_path)).
You can use the monkeypatch fixture provided by pytest to manipulate env variables:
@pytest.fixture(scope="function")
def configured_env(monkeypatch):
    monkeypatch.setenv("SQLALCHEMY_DATABASE_URI", "sqlite:///")
    monkeypatch.setenv("JWT_SECRET_KEY", "testing")

def test_client(configured_env):
    # env variables are set here
    ...
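monkeypatch automatically undoes these changes when each test finishes, so the variables don't leak into other tests.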

How to access an AWS Lambda environment variable from Python

Using the new environment variable support in AWS Lambda, I've added an env var via the webui for my function.
How do I access this from Python? I tried:
import os
MY_ENV_VAR = os.environ['MY_ENV_VAR']
but my function stopped working (if I hard code the relevant value for MY_ENV_VAR it works fine).
AWS Lambda environment variables can be defined using the AWS Console, CLI, or SDKs. This is how you would define an AWS Lambda function that uses an LD_LIBRARY_PATH environment variable using the AWS CLI:
aws lambda create-function \
    --region us-east-1 \
    --function-name myTestFunction \
    --zip-file fileb://path/package.zip \
    --role role-arn \
    --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
    --handler index.handler \
    --runtime nodejs4.3 \
    --profile default
Once created, environment variables can be read using the support your language provides for accessing the environment, e.g. using process.env for Node.js. When using Python, you would need to import the os library, like in the following example:
...
import os
...
print("environment variable: " + os.environ['variable'])
Resource Link:
AWS Lambda Now Supports Environment Variables
Assuming you have created the .env file alongside your settings module:
.
├── .env
└── settings.py
Add the following code to your settings.py
# settings.py
from os.path import join, dirname
from dotenv import load_dotenv
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)
Alternatively, you can use the find_dotenv() method, which will try to find a .env file by (a) guessing where to start, using __file__ or the working directory (which allows this to work in non-file contexts such as IPython notebooks and the REPL), and then (b) walking up the directory tree looking for the specified file, called .env by default.
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
Now, you can access the variables either from system environment variable or loaded from .env file.
Resource Link:
https://github.com/theskumar/python-dotenv
gepoggio answered in this post: https://github.com/serverless/serverless/issues/577#issuecomment-192781002
A workaround is to use python-dotenv:
https://github.com/theskumar/python-dotenv
import os

import dotenv

here = os.path.dirname(os.path.realpath(__file__))
dotenv.load_dotenv(os.path.join(here, "../.env"))
dotenv.load_dotenv(os.path.join(here, "../../.env"))
It tries to load the file twice because, when run locally, it is in project/.env, and when running in Lambda the .env is located in project/component/.env.
Both
import os
os.getenv('MY_ENV_VAR')
And
os.environ['MY_ENV_VAR']
are feasible solutions; just make sure in the Lambda console that the env variables are actually there.
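The difference matters when the variable is missing, which is the likely reason the function "stopped working" above:
import os

print(os.getenv('MY_ENV_VAR'))             # None if the variable is unset
print(os.getenv('MY_ENV_VAR', 'default'))  # or a fallback value of your choice
print(os.environ['MY_ENV_VAR'])            # raises KeyError if unset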
I used this code; it includes both cases, setting the variable from the handler and setting it from outside the handler.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Trying new lambda stuff"""
import os
import configparser

class BqEnv(object):
    """Env and self variables settings"""

    def __init__(self, service_account, configfile=None):
        config = self.parseconfig(configfile)
        self.env = config
        self.service_account = service_account

    @staticmethod
    def parseconfig(configfile):
        """Connection and conf parser"""
        config = configparser.ConfigParser()
        config.read(configfile)
        env = config.get('BigQuery', 'env')
        return env

    def variable_tests(self):
        """Trying conf as a lambda variable"""
        my_env_var = os.environ['MY_ENV_VAR']
        print(my_env_var)
        print(self.env)
        return True

def lambda_handler(event, context):
    """Trying env variables."""
    print(event)
    configfile = os.environ['CONFIG_FILE']
    print(configfile)
    print(type(str(configfile)))
    bqm = BqEnv('some-json.json', configfile)
    bqm.variable_tests()
    return True
I tried this with a demo config file that has this:
[BigQuery]
env = prod
On the Lambda side, the CONFIG_FILE environment variable was set to point at that config file.
Hope this can help!
os.environ["variable_name"]
In the configuration section of AWS lambda, make sure you declare the variable with the same name that you're trying to access here. For this example, it should be variable_name

Flask: app.config settings from .env & .flaskenv in mod_wsgi

I have spent quite a while trying to figure out how to set .env and .flaskenv configuration values for my flask backend on a Google Cloud Platform server. I am using apache2, mod_wsgi, Flask, Python 3.6 and SQLAlchemy. My backend works fine locally on my Mac using pure Flask.
Having python-dotenv installed, running the flask command will set the environment variables defined in the files .env and .flaskenv. This, however, does not work with wsgi: the request from apache is redirected to execute my run.wsgi file, and there is no mechanism (that I know of) to set the environment variables defined in .env and .flaskenv.
The minimum requirement is to pass the application information about whether the test or development environment should be used. From there I could populate app.config values from an object within __init__.py. However, being able to somehow use config values from .env and .flaskenv would be far better. I would really appreciate it if somebody had good ideas here: what is the best practice for setting app.config values in a wsgi environment?
There are two posts where this same problem has been presented; neither really settles on a best practice for tackling this challenge (and I am sure I am not the only one having a hard time with this):
Why can't Flask can't see my environment variables from Apache (mod_wsgi)?
Apache SetEnv not working as expected with mod_wsgi
My run.wsgi:
import sys
sys.path.append("/var/www/contacts-api/venv/lib/python3.6/site-packages")
sys.path.insert(0,"/var/www/contacts-api/")
from contacts import create_app
app = create_app('settings.py')
app.run()
[3]: Flask-AppConfig allows you to configure an application using pre-set methods:
from flask import Flask
from flask_appconfig import AppConfig

def create_app(configfile=None):
    app = Flask('myapp')
    AppConfig(app, configfile)
    return app
The application returned by create_app will, in order:
Load default settings from a module called myapp.default_config, if it exists. (method described in http://flask.pocoo.org/docs/config/#configuring-from-files )
Load settings from a configuration file whose name is given in the environment variable MYAPP_CONFIG (see link from 1.).
Load json or string values directly from environment variables that start with a prefix of MYAPP_, i.e. setting MYAPP_SQLALCHEMY_ECHO=true will cause the setting of SQLALCHEMY_ECHO to be True.
Any of these behaviors can be altered or disabled by passing the appropriate options to the constructor or init_app().
[4]: Using “ENV-only”
If you only want to use the environment-parsing functions of Flask-AppConfig, the appropriate functions are exposed:
from flask_appconfig.heroku import from_heroku_envvars
from flask_appconfig.env import from_envvars
# from environment variables. note that you need to set the prefix, as
# no auto-detection can be done without an app object
from_envvars(app.config, prefix=app.name.upper() + '_')
# also possible: parse heroku configuration values
# any dict-like object will do as the first parameter
from_heroku_envvars(app.config)
After reading more about this and trying many different things, I reached the conclusion that there is no reasonable way to configure a Flask application using .env and .flaskenv files. I ended up using a method presented in Configuration Handling, which enables managing development/testing/production environments in a reasonable manner:
app = Flask(__name__)
app.config.from_object('yourapplication.default_settings')
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
My run.wsgi (being used in google cloud platform compute instance):
import sys
import os

sys.path.append("/var/www/myapp/venv/lib/python3.6/site-packages")
sys.path.insert(0, "/var/www/myapp/")

from contacts import create_app

os.environ['SETTINGS_PLATFORM_SPECIFIC'] = "/path/settings_platform_specific.py"
os.environ['CONFIG_ENVIRONMENT'] = 'DevelopmentConfig'

app = create_app()
app.run()
Locally on my mac I use run.py (for flask run):
import os

from contacts import create_app

os.environ['SETTINGS_PLATFORM_SPECIFIC'] = "/path/settings_platform_specific.py"
os.environ['CONFIG_ENVIRONMENT'] = 'DevelopmentConfig'

if __name__ == '__main__':
    app = create_app()
    app.run()
For app creation, __init__.py:
def create_app():
    app = Flask(__name__, template_folder='templates')
    app.config.from_object(f'contacts.settings_common.{os.environ.get("CONFIG_ENVIRONMENT")}')
    app.config.from_envvar('SETTINGS_PLATFORM_SPECIFIC')
    db.init_app(app)
    babel.init_app(app)
    mail.init_app(app)
    bcrypt.init_app(app)
    app.register_blueprint(routes)
    create_db(app)
    return app
At this point it looks like this works out fine for my purposes. The most important thing is that I can easily manage different environments and deploy the backend service to google platform using git.
I was wrestling with the same conundrum, wanting to use the same .env file I was using with docker-compose while developing with flask run. I ended up using python-dotenv, like so:
In .env:
DEBUG=True
APPLICATION_ROOT=${PWD}
In config.py:
import os

from dotenv import load_dotenv

load_dotenv()

class Config(object):
    SECRET_KEY = os.getenv('SECRET_KEY') or 'development-secret'
    DEBUG = os.getenv("DEBUG") or False
    APPLICATION_ROOT = os.getenv("APPLICATION_ROOT") or os.getcwd()
I haven't experimented with it yet, but I may also give flask-env a try in combination with this solution.
An easy way to go is to use load_dotenv() together with from_mapping():
from flask import Flask
from dotenv import load_dotenv, dotenv_values

load_dotenv()

app = Flask(__name__)
config = dotenv_values()
app.config.from_mapping(config)
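Note that Flask's from_mapping() ignores keys that are not uppercase, so the variable names in the .env file must be uppercase to end up in app.config.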
