How to access an AWS Lambda environment variable from Python

Using the new environment variable support in AWS Lambda, I've added an env var via the webui for my function.
How do I access this from Python? I tried:
import os
MY_ENV_VAR = os.environ['MY_ENV_VAR']
but my function stopped working (if I hard code the relevant value for MY_ENV_VAR it works fine).

AWS Lambda environment variables can be defined using the AWS Console, CLI, or SDKs. This is how you would define an AWS Lambda function that uses an LD_LIBRARY_PATH environment variable using the AWS CLI:
aws lambda create-function \
  --region us-east-1 \
  --function-name myTestFunction \
  --zip-file fileb://path/package.zip \
  --role role-arn \
  --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
  --handler index.handler \
  --runtime nodejs4.3 \
  --profile default
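The same can be done from Python with boto3, the AWS SDK; a minimal sketch reusing the function and variable from the CLI example above:
import boto3

client = boto3.client('lambda', region_name='us-east-1')
client.update_function_configuration(
    FunctionName='myTestFunction',
    Environment={'Variables': {'LD_LIBRARY_PATH': '/usr/bin/test/lib64'}},
)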
Once created, environment variables can be read using the support your language provides for accessing the environment, e.g. using process.env in Node.js. In Python, you need to import the os module, as in the following example:
...
import os
...
print("environment variable: " + os.environ['variable'])
Resource Link:
AWS Lambda Now Supports Environment Variables
Assuming you have created the .env file alongside your settings module:
.
├── .env
└── settings.py
Add the following code to your settings.py
# settings.py
from os.path import join, dirname
from dotenv import load_dotenv
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)
Alternatively, you can use the find_dotenv() method, which tries to find a .env file by (a) guessing where to start, using __file__ or the working directory -- allowing this to work in non-file contexts such as IPython notebooks and the REPL -- and then (b) walking up the directory tree looking for the specified file, called .env by default.
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
Now, you can access the variables either from system environment variable or loaded from .env file.
Resource Link:
https://github.com/theskumar/python-dotenv
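For example, a minimal sketch (assuming a .env file containing MY_ENV_VAR=hello):
import os
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())
print(os.environ['MY_ENV_VAR'])  # -> hello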
gepoggio answered in this post: https://github.com/serverless/serverless/issues/577#issuecomment-192781002
A workaround is to use python-dotenv:
https://github.com/theskumar/python-dotenv
import os
import dotenv

# "here" is the directory containing the current file
# (assumed; the original snippet relies on it being defined)
here = os.path.dirname(os.path.realpath(__file__))
dotenv.load_dotenv(os.path.join(here, "../.env"))
dotenv.load_dotenv(os.path.join(here, "../../.env"))
It tries to load the file twice because when run locally the .env is in
project/.env, and when running on Lambda the .env is located in
project/component/.env

Both
import os
os.getenv('MY_ENV_VAR')
And
os.environ['MY_ENV_VAR']
are feasible solutions; just make sure in the Lambda GUI that the environment variables are actually there.
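The practical difference between the two (a quick sketch): os.environ[...] raises a KeyError when the variable is missing, while os.getenv() returns None or a supplied default.
import os

# os.environ raises KeyError if the variable is missing
try:
    value = os.environ['MY_ENV_VAR']
except KeyError:
    value = None

# os.getenv returns None, or a default, instead of raising
value = os.getenv('MY_ENV_VAR', 'fallback')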

I used this code; it includes both cases: reading a variable set on the function itself from the handler, and reading one from outside the handler.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Trying new lambda stuff"""
import os
import configparser


class BqEnv(object):
    """Env and self variables settings"""

    def __init__(self, service_account, configfile=None):
        config = self.parseconfig(configfile)
        self.env = config
        self.service_account = service_account

    @staticmethod
    def parseconfig(configfile):
        """Connection and conf parser"""
        config = configparser.ConfigParser()
        config.read(configfile)
        env = config.get('BigQuery', 'env')
        return env

    def variable_tests(self):
        """Trying conf as a lambda variable"""
        my_env_var = os.environ['MY_ENV_VAR']
        print(my_env_var)
        print(self.env)
        return True


def lambda_handler(event, context):
    """Trying env variables."""
    print(event)
    configfile = os.environ['CONFIG_FILE']
    print(configfile)
    print(type(str(configfile)))
    bqm = BqEnv('some-json.json', configfile)
    bqm.variable_tests()
    return True
I tried this with a demo config file that has this:
[BigQuery]
env = prod
And in the Lambda console, the CONFIG_FILE and MY_ENV_VAR variables were set accordingly.
Hope this can help!

os.environ["variable_name"]
In the configuration section of AWS lambda, make sure you declare the variable with the same name that you're trying to access here. For this example, it should be variable_name
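Put together, a minimal handler sketch (the variable name is just the placeholder from above):
import os

def lambda_handler(event, context):
    # "variable_name" must match the key declared in the Lambda configuration
    value = os.environ['variable_name']
    return {'variable_name': value}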


How can I get the value of an environment variable in Python?
Environment variables are accessed through os.environ:
import os
print(os.environ['HOME'])
To see a list of all environment variables:
print(os.environ)
If a key is not present, attempting to access it will raise a KeyError. To avoid this:
# Returns `None` if the key doesn't exist
print(os.environ.get('KEY_THAT_MIGHT_EXIST'))
# Returns `default_value` if the key doesn't exist
print(os.environ.get('KEY_THAT_MIGHT_EXIST', default_value))
# Returns `default_value` if the key doesn't exist
print(os.getenv('KEY_THAT_MIGHT_EXIST', default_value))
To check if the key exists (returns True or False)
'HOME' in os.environ
You can also use get() when printing the key; useful if you want to use a default.
print(os.environ.get('HOME', '/home/username/'))
where /home/username/ is the default
Here's how to check if $FOO is set:
import os
import sys

try:
    os.environ["FOO"]
except KeyError:
    print("Please set the environment variable FOO")
    sys.exit(1)
Actually it can be done this way:
import os
for item, value in os.environ.items():
    print('{}: {}'.format(item, value))
Or simply:
for i, j in os.environ.items():
    print(i, j)
For viewing the value in the parameter:
print(os.environ['HOME'])
Or:
print(os.environ.get('HOME'))
To set the value:
os.environ['HOME'] = '/new/value'
You can access the environment variables using
import os
print(os.environ)
Try to see the content of the PYTHONPATH or PYTHONHOME environment variables. Maybe this will be helpful for your second question.
As for the environment variables:
import os
print(os.environ["HOME"])
Import the os module:
import os
To get an environment variable:
os.environ.get('Env_var')
To set an environment variable:
# Set environment variables
os.environ['Env_var'] = 'Some Value'
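Note that setting os.environ this way only affects the current process and any child processes it spawns, not the parent shell. A quick sketch:
import os
import subprocess

os.environ['Env_var'] = 'Some Value'

# child processes inherit the modified environment
subprocess.run(['python', '-c', "import os; print(os.environ['Env_var'])"])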
import os

for a in os.environ:
    print('Var: ', a, 'Value: ', os.getenv(a))
print("all done")
That will print all of the environment variables along with their values.
If you are planning to use the code in a production web application, with a framework like Django or Flask, use a project like envparse. Using it, you can read the value as your defined type.
from envparse import env
# will read WHITE_LIST=hello,world,hi to white_list = ["hello", "world", "hi"]
white_list = env.list("WHITE_LIST", default=[])
# Perfect for reading boolean
DEBUG = env.bool("DEBUG", default=False)
NOTE: kennethreitz's autoenv is a recommended tool for making project-specific environment variables. For those who are using autoenv, please note to keep the .env file private (inaccessible to public).
There are also a number of great libraries. Envs, for example, will allow you to parse objects out of your environment variables, which is rad. For example:
from envs import env
env('SECRET_KEY') # 'your_secret_key_here'
env('SERVER_NAMES',var_type='list') #['your', 'list', 'here']
You can also try this:
First, install python-decouple
pip install python-decouple
Import it in your file
from decouple import config
Then get the environment variable
SECRET_KEY=config('SECRET_KEY')
Read more about the Python library here.
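Like envparse above, decouple can also cast values; a short sketch (the variable names are placeholders):
from decouple import config

# cast converts the raw string value; default is used when the key is missing
DEBUG = config('DEBUG', default=False, cast=bool)
PORT = config('PORT', default=8000, cast=int)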
Edited - October 2021
Following @Peter's comment, here's how you can test it:
main.py
#!/usr/bin/env python
from os import environ

# Initialize variables
num_of_vars = 50
for i in range(1, num_of_vars):
    environ[f"_BENCHMARK_{i}"] = f"BENCHMARK VALUE {i}"

def stopwatch(repeat=1, autorun=True):
    """
    Source: https://stackoverflow.com/a/68660080/5285732
    stopwatch decorator to calculate the total time of a function
    """
    import timeit
    import functools

    def outer_func(func):
        @functools.wraps(func)
        def time_func(*args, **kwargs):
            t1 = timeit.default_timer()
            for _ in range(repeat):
                r = func(*args, **kwargs)
            t2 = timeit.default_timer()
            print(f"Function={func.__name__}, Time={t2 - t1}")
            return r
        if autorun:
            try:
                time_func()
            except TypeError:
                raise Exception(f"{time_func.__name__}: autorun only works with no parameters, you may want to use @stopwatch(autorun=False)") from None
        return time_func

    if callable(repeat):
        func = repeat
        repeat = 1
        return outer_func(func)
    return outer_func

@stopwatch(repeat=10000)
def using_environ():
    for item in environ:
        pass

@stopwatch
def using_dict(repeat=10000):
    env_vars_dict = dict(environ)
    for item in env_vars_dict:
        pass
python "main.py"
# Output
Function=using_environ, Time=0.216224731
Function=using_dict, Time=0.00014206099999999888
If this is true ... It's 1500x faster to use a dict() instead of accessing environ directly.
A performance-driven approach - calling environ is expensive, so it's better to call it once and save it to a dictionary. Full example:
from os import environ
# Slower
print(environ["USER"], environ["NAME"])
# Faster
env_dict = dict(environ)
print(env_dict["USER"], env_dict["NAME"])
P.S. If you worry about exposing private environment variables, sanitize env_dict after the assignment.
For Django, see Django-environ.
$ pip install django-environ
import environ
env = environ.Env(
# set casting, default value
DEBUG=(bool, False)
)
# reading .env file
environ.Env.read_env()
# False if not in os.environ
DEBUG = env('DEBUG')
# Raises Django's ImproperlyConfigured exception if SECRET_KEY not in os.environ
SECRET_KEY = env('SECRET_KEY')
You should first import os using
import os
and then actually print the environment variable value
print(os.environ['yourvariable'])
Of course, replace yourvariable with the variable you want to access.
The tricky part of using nested for-loops in one-liners is that you have to use list comprehension. So in order to print all your environment variables, without having to import a foreign library, you can use:
python -c "import os;L=[f'{k}={v}' for k,v in os.environ.items()]; print('\n'.join(L))"
You can use python-dotenv module to access environment variables
Install the module using:
pip install python-dotenv
Then import the module into your Python file
import os
from dotenv import load_dotenv
# Load the environment variables
load_dotenv()
# Access the environment variable
print(os.getenv("BASE_URL"))

How to load environment variables from a file using python-dotenv before loading pytest scripts?

I have tests which run inside a docker container with https://github.com/pytest-docker-compose/pytest-docker-compose, but the container takes too long to start up/shut down. I would like to let docker-compose run the tests only on the CI machine, or when needed.
For this, I define the tests like this in simple_test_runner.py:
import os
THIS_FOLDER = os.path.dirname(os.path.realpath(__file__))
RUN_TESTS_LOCALLY = os.environ.get('RUN_TESTS_LOCALLY')
if RUN_TESTS_LOCALLY:
    def test_simple(run_process, load_env):
        output, returncode = run_process("python3 " + THIS_FOLDER + "/simple_test.py")
        assert returncode == 0
else:
    def test_simple(function_scoped_container_getter):
        container = function_scoped_container_getter.get("container_name")
        exec = container.create_exec("python3 /simple_test.py")
        ...
This works fine if I export RUN_TESTS_LOCALLY=1 before calling pytest -vs ., but if I try to use https://github.com/theskumar/python-dotenv in my conftest.py:
import pytest
from dotenv import load_dotenv

@pytest.fixture(scope='session', autouse=True)
def load_env():
    print("Loading .env file")
    load_dotenv()
The environment variables are only loaded after pytest has already imported simple_test_runner.py, so my tests always run inside docker instead of outside it, even when RUN_TESTS_LOCALLY=1 is defined in the .env file.
How can I make pytest call load_dotenv() before my if RUN_TESTS_LOCALLY: switch is evaluated in the test modules?
Or do you know an alternative to if RUN_TESTS_LOCALLY: which allows me to use load_dotenv() and switch between running my tests inside docker-compose or not?
Related:
https://github.com/quiqua/pytest-dotenv
How to load variables from .env file for pytests
pytest -- how do I use global / session-wide fixtures?
The code I originally posted is working:
@pytest.fixture(scope='session', autouse=True)
def load_env(request):
    file = THIS_FOLDER + '/.env'
    if request.config.getoption("verbose") > 0:
        print("Loading", file, file=sys.stderr)
    load_dotenv(dotenv_path=file)
The problem was that, when I had tested, I had set my environment variable to RUN_TESTS_LOCALLY= (the empty string). This was causing dotenv not to override my environment variable. Once I unset the variable with bash's unset RUN_TESTS_LOCALLY, dotenv finally loaded the environment file correctly.
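An alternative sketch that sidesteps the import-order problem altogether: pytest calls the pytest_configure hook before it collects and imports the test modules, so variables loaded there are already visible to module-level code like the if RUN_TESTS_LOCALLY: switch (assuming python-dotenv is installed):
# conftest.py
from dotenv import load_dotenv

def pytest_configure(config):
    # runs before test modules are collected and imported
    load_dotenv(override=True)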

Calling settings.py from another Python script

I have a .env file where I have added environment settings. I wrote settings.py, which reads the .env file and stores the values of those settings. I want to import settings.py from other_script.py, but I am getting None as the value.
When I execute settings.py directly, it prints a value. When I execute other_script.py, which imports settings, the values become None.
settings.py:
import os
from dotenv import load_dotenv
from pathlib import Path
env_path = Path('.') / '.env'
load_dotenv(env_path)
MONGO_IP = os.getenv("MONGO_IP")
MONGO_PORT = os.getenv("MONGO_PORT")
MONGO_DB = os.getenv("MONGO_DB")
print(MONGO_DB)
other_script.py:
from pymongo import MongoClient
from settings import MONGO_IP, MONGO_PORT, MONGO_DB
print(MONGO_DB)
mongo_client = MongoClient(MONGO_IP, MONGO_PORT)[MONGO_DB]
So when I execute other_script.py, the keys should return a value. What am I missing?
Two things to check are:
settings.py and other_script.py are in the same folder. Without this, other_script.py will not be able to find settings.py.
Look at your env and see if load_dotenv(env_path) is working properly. If env values for MONGO_* are not set properly you cannot read them.
If they are not in the same folder, the issue perhaps is that you don't have an __init__.py file in the folder you want to import from, since it is needed to make it a package. The init file can be empty.
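A third thing worth checking: in the posted settings.py, Path('.') / '.env' is resolved against the current working directory, so it only finds the file when the script is run from the project folder. A sketch of a more robust variant that resolves the path relative to settings.py itself:
import os
from pathlib import Path
from dotenv import load_dotenv

# resolve .env next to this file, independent of the working directory
env_path = Path(__file__).resolve().parent / '.env'
load_dotenv(dotenv_path=env_path)

MONGO_DB = os.getenv("MONGO_DB")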

How gunicorn and shared variables work

I have a class that I instantiate in a request (it's a ML model that loads and takes a bit of time to configure on startup). The idea is to only do that once and have each request use the model for predictions. Does gunicorn instantiate the app every time?
Aka, will the model retrain every time a new request comes in?
It sounds like you could benefit from the application preloading:
http://docs.gunicorn.org/en/stable/settings.html#preload-app
This will let you load app code before spinning off your workers.
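For example, a minimal sketch of a gunicorn config file (started with gunicorn -c gunicorn.conf.py app:app; the module and app names are placeholders):
# gunicorn.conf.py
preload_app = True  # import the app (and load the model) once, before forking workers
workers = 4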
For those who are looking for how to share a variable between gunicorn workers without using Redis or Session, here is a good alternative with the awesome python-dotenv:
The principle is to read and write shared variables from a file, which could be done with plain open(), but dotenv is perfect in this situation.
pip install python-dotenv
In the app directory, create a .env file:
├── .env
└── app.py
.env:
var1="value1"
var2="value2"
app.py: # flask app
from flask import Flask
import os
from dotenv import load_dotenv

app = Flask(__name__)

# define the path explicitly if the .env file is not in the same folder
env_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), '.env')
load_dotenv(dotenv_path=env_path)  # you may need a first load

def getdotenv(env):
    try:
        load_dotenv(dotenv_path=env_path, override=True)  # re-read on every call
        val = os.getenv(env)
        return val
    except Exception:
        return None

def setdotenv(key, value):  # string
    if key:
        if not value:
            value = '\'\''
        cmd = 'dotenv -f ' + env_path + ' set -- ' + key + ' ' + value  # set env variable
        os.system(cmd)

@app.route('/get')
def index():
    var1 = getdotenv('var1')  # retrieve value of variable var1
    return var1

@app.route('/set')
def update():
    setdotenv('var2', 'newValue2')  # set variable var2='newValue2'
    return 'OK'

Flask: app.config settings from .env & .flaskenv in mod_wsgi

I have spent quite a while trying to figure out how to set .env and .flaskenv configuration values in my Flask backend on a Google Cloud Platform server. I am using apache2, mod_wsgi, Flask, Python 3.6 and SQLAlchemy. My backend works fine locally on my Mac using pure Flask.
With python-dotenv installed, running the flask command will set the environment variables defined in the files .env and .flaskenv. This, however, does not work with wsgi: the request from apache is redirected to execute my run.wsgi file, and there is no mechanism (that I know of) to set the environment variables defined in .env and .flaskenv.
The minimum requirement is to pass to the application the information of whether the test or development environment should be used. From there I could, within __init__.py, populate app.config values from an object. However, being able to somehow use config values from .env and .flaskenv would be far better. I would really appreciate it if somebody had good ideas here - what is the best practice for setting app.config values in a wsgi environment?
There are two posts where this same problem has been presented - they really do not have a best practice how to tackle this challenge (and I am sure I am not the only one having a hard time with this):
Why can't Flask can't see my environment variables from Apache (mod_wsgi)?
Apache SetEnv not working as expected with mod_wsgi
My run.wsgi:
import sys
sys.path.append("/var/www/contacts-api/venv/lib/python3.6/site-packages")
sys.path.insert(0,"/var/www/contacts-api/")
from contacts import create_app
app = create_app('settings.py')
app.run()
Flask-AppConfig allows you to configure an application using pre-set methods:
from flask import Flask
from flask_appconfig import AppConfig

def create_app(configfile=None):
    app = Flask('myapp')
    AppConfig(app, configfile)
    return app
The application returned by create_app will, in order:
Load default settings from a module called myapp.default_config, if it exists. (method described in http://flask.pocoo.org/docs/config/#configuring-from-files )
Load settings from a configuration file whose name is given in the environment variable MYAPP_CONFIG (see link from 1.).
Load json or string values directly from environment variables that start with a prefix of MYAPP_, i.e. setting MYAPP_SQLALCHEMY_ECHO=true will cause the setting of SQLALCHEMY_ECHO to be True.
Any of these behaviors can be altered or disabled by passing the appropriate options to the constructor or init_app().
Using "ENV-only"
If you only want to use the environment-parsing functions of Flask-AppConfig, the appropriate functions are exposed:
from flask_appconfig.heroku import from_heroku_envvars
from flask_appconfig.env import from_envvars
# from environment variables. note that you need to set the prefix, as
# no auto-detection can be done without an app object
from_envvars(app.config, prefix=app.name.upper() + '_')
# also possible: parse heroku configuration values
# any dict-like object will do as the first parameter
from_heroku_envvars(app.config)
After reading more about this and trying many different things, I reached the conclusion that there is no reasonable way to configure a Flask application using .env and .flaskenv files under mod_wsgi. I ended up using the method presented in Configuration Handling, which enables managing development/testing/production environments in a reasonable manner:
app = Flask(__name__)
app.config.from_object('yourapplication.default_settings')
app.config.from_envvar('YOURAPPLICATION_SETTINGS')
My run.wsgi (being used in google cloud platform compute instance):
import sys
import os
from contacts import create_app
sys.path.append("/var/www/myapp/venv/lib/python3.6/site-packages")
sys.path.insert(0,"/var/www/myapp/")
os.environ['SETTINGS_PLATFORM_SPECIFIC'] = "/path/settings_platform_specific.py"
os.environ['CONFIG_ENVIRONMENT'] = 'DevelopmentConfig'
app = create_app()
app.run()
Locally on my mac I use run.py (for flask run):
import os
from contacts import create_app
os.environ['SETTINGS_PLATFORM_SPECIFIC'] ="/path/settings_platform_specific.py"
os.environ['CONFIG_ENVIRONMENT'] = 'DevelopmentConfig'
if __name__ == '__main__':
    app = create_app()
    app.run()
For app creation, __init__.py:
def create_app():
    app = Flask(__name__, template_folder='templates')
    app.config.from_object(f'contacts.settings_common.{os.environ.get("CONFIG_ENVIRONMENT")}')
    app.config.from_envvar('SETTINGS_PLATFORM_SPECIFIC')
    db.init_app(app)
    babel.init_app(app)
    mail.init_app(app)
    bcrypt.init_app(app)
    app.register_blueprint(routes)
    create_db(app)
    return app
At this point it looks like this works out fine for my purposes. The most important thing is that I can easily manage different environments and deploy the backend service to google platform using git.
I was wrestling with the same conundrum, wanting to use the same .env file I was using with docker-compose while developing with flask run. I ended up using python-dotenv, like so:
In .env:
DEBUG=True
APPLICATION_ROOT=${PWD}
In config.py:
import os
from dotenv import load_dotenv
load_dotenv()
class Config(object):
    SECRET_KEY = os.getenv('SECRET_KEY') or 'development-secret'
    DEBUG = os.getenv("DEBUG") or False
    APPLICATION_ROOT = os.getenv("APPLICATION_ROOT") or os.getcwd()
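The class can then be loaded in the app factory; a minimal sketch, assuming the class above lives in config.py:
from flask import Flask
from config import Config  # the Config class defined above

app = Flask(__name__)
app.config.from_object(Config)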
I haven't experimented with it yet, but I may also give flask-env a try in combination with this solution.
The easy way to go is using load_dotenv() and from_mapping:
from flask import Flask
from dotenv import load_dotenv, dotenv_values
load_dotenv()
app = Flask(__name__)
config = dotenv_values()
app.config.from_mapping(config)
