Calling settings.py from another Python script - python

I have a .env file where I have added environment settings. I wrote settings.py, which reads the .env file and stores the values of those settings. I want to import settings.py from other_script.py, but I am getting None as the value.
When I execute settings.py directly, it prints a value. On the other hand, when I execute other_script.py, which imports settings, the values are None.
settings.py:
import os
from dotenv import load_dotenv
from pathlib import Path
env_path = Path('.') / '.env'
load_dotenv(env_path)
MONGO_IP = os.getenv("MONGO_IP")
MONGO_PORT = os.getenv("MONGO_PORT")
MONGO_DB = os.getenv("MONGO_DB")
print(MONGO_DB)
other_script.py:
from pymongo import MongoClient
from settings import MONGO_IP, MONGO_PORT, MONGO_DB
print(MONGO_DB)
mongo_client = MongoClient(MONGO_IP, MONGO_PORT)[MONGO_DB]
So when I execute other_script.py, the keys should have values. What am I missing?

Two things to check are:
1. settings.py and other_script.py are in the same folder. Without this, other_script.py will not be able to find settings.py.
2. Whether load_dotenv(env_path) is actually finding and loading your .env file. If the MONGO_* values never make it into the environment, os.getenv cannot read them (see the sketch below).

If they are not in the same folder, the issue perhaps is that you don't have an __init__.py file in the folder you want to import from, since it is needed to make it a package. The init file can be empty.
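On the second point: Path('.') resolves against the current working directory, not against the location of settings.py, so launching other_script.py from a different directory can leave the .env file unfound. A minimal sketch that anchors the path to the settings file instead (assuming .env sits right next to settings.py):
import os
from pathlib import Path
from dotenv import load_dotenv
# resolve .env relative to this file, not the process's working directory
env_path = Path(__file__).resolve().parent / '.env'
load_dotenv(dotenv_path=env_path)
MONGO_IP = os.getenv("MONGO_IP")
MONGO_PORT = os.getenv("MONGO_PORT")
MONGO_DB = os.getenv("MONGO_DB")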


How to load environment variables in Flask unit test

I'm trying to test an app which relies on several environment variables (API keys, mostly). I'd like to keep those as environment variables instead of putting them directly into a config file, but my unit tests (using unittest) won't run because the environment variables aren't loaded when the tests start.
I've tried calling load_dotenv in setUp (see below) but that doesn't make a difference. How can I ensure that the test suite reads the environment variables correctly?
.flaskenv
FLASK_APP=myapp.py
BASE_URI='https://example.com'
OTHER_API_KEY='abc123itsasecret'
config.py
import os
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    SQLALCHEMY_DATABASE_URI = "sqlite:///"
    API_URL = os.environ.get('BASE_URI') + "/api/v1"
    SECRET_KEY = os.environ.get('OTHER_API_KEY')
test_file.py
import os
import unittest
from dotenv import load_dotenv
import config
# (the application's app and db objects are also imported at the top of this
# file, which is what later triggers the error described below)
basedir = os.path.abspath(os.path.dirname(__file__))
class TestTheThing(unittest.TestCase):
    def setUp(self):
        load_dotenv(os.path.join(basedir, '.flaskenv'))
        app.config.from_object(config.Config)
        app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite://"
        db.create_all()
        self.client = app.test_client()
In the console, after running python -m unittest myapp.test_file:
File "/../../package/config.py", line 29, in Config
'API_URL': os.environ.get('BASE_URI') + 'api/v1/',
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
After reading more, I realized that I needed to load the variables in my config file, not in the test runner. Adding load_dotenv to the top of the config file allowed tests to run.
config.py
import os
from dotenv import load_dotenv
basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.flaskenv'))
# rest of file
I spent time trying to figure out where the crash was happening. I added a breakpoint inside setUp that never got hit, so the failure had to occur earlier in the execution.
Reading the stack trace more carefully, the crash was caused by importing the app at the top of the test file. When the app is imported, the config tries to pull in all the variables before setUp ever gets a chance to call load_dotenv. Loading the environment variables before the config object was loaded into the Flask object cleared it up.
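As a minimal sketch of that import-order behaviour (module and variable names here are made up for illustration), anything at module level, including a class body, runs the moment the module is imported, which happens long before unittest ever calls setUp:
# hypothetical_config.py
import os
class Config(object):
    # evaluated at import time; raises TypeError if BASE_URI is not set yet
    API_URL = os.environ.get('BASE_URI') + "/api/v1"
# hypothetical_test.py
# import hypothetical_config   # the TypeError fires on this line, not inside a test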

Import variables from a config File

I have a script split across multiple files, plus a config file with all the variables stored in one Python file.
Folder structure:
Config file:
If I try to run the main file which calls the head function imported, an error pops up saying that the config cannot be imported.
Imports:
Since your Functions folder has an __init__.py file, and your app executes from Main.py (i.e. Main.py is the entry point where __name__ == "__main__"), you can import the config from anywhere in your app like this:
from Functions.Config import *
Edit:
However, from module import * is not recommended for local imports. You can give the import an alias and refer to name.variable instead of using the names directly.
Example:
def head():
    # import the config module with an alias
    import Functions.Config as conf
    print(conf.more_200)
head()
>>> 50
Your syntax is wrong.
It is supposed to be from Config import * and not import .Config import *
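For reference, a layout consistent with both answers might look like this (the file names are assumed from the prose above; the __init__.py can be empty):
project/
├── Main.py
└── Functions/
    ├── __init__.py
    └── Config.py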

environment variables not updating

I am using the dotenv package. I had a key saved in my .env file, but after updating it to a new key my script still outputs the old one. The .env file is in the root directory.
I thought that load_dotenv() would take in whatever keys are in the file at that point in time and make them available to the script. What am I doing wrong?
import os
from dotenv import load_dotenv
import praw
load_dotenv()
reddit = praw.Reddit(client_id=os.getenv('reddit_personal_use'),
                     client_secret=os.getenv('reddit_api_key'),
                     user_agent=os.getenv('reddit_app_name'),
                     username=os.getenv('reddit_username'),
                     password=os.getenv('reddit_pw'))
I had to set override=True
load_dotenv(override=True)
By default, load_dotenv does not override environment variables that are already set in the process. To make the values from .env win, pass override=True to load_dotenv().
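A short sketch of that behaviour (MY_KEY is a made-up variable name):
import os
from dotenv import load_dotenv
# if MY_KEY is already present in os.environ (e.g. set by the shell or by an
# earlier load), a plain load_dotenv() keeps the existing value
load_dotenv()
# with override=True, the value from the .env file replaces the existing one
load_dotenv(override=True)
print(os.getenv("MY_KEY"))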

How to access an AWS Lambda environment variable from Python

Using the new environment variable support in AWS Lambda, I've added an environment variable via the web UI for my function.
How do I access this from Python? I tried:
import os
MY_ENV_VAR = os.environ['MY_ENV_VAR']
but my function stopped working (if I hard code the relevant value for MY_ENV_VAR it works fine).
AWS Lambda environment variables can be defined using the AWS Console, CLI, or SDKs. This is how you would define an AWS Lambda function that uses an LD_LIBRARY_PATH environment variable using the AWS CLI:
aws lambda create-function \
  --region us-east-1 \
  --function-name myTestFunction \
  --zip-file fileb://path/package.zip \
  --role role-arn \
  --environment Variables={LD_LIBRARY_PATH=/usr/bin/test/lib64} \
  --handler index.handler \
  --runtime nodejs4.3 \
  --profile default
Once created, environment variables can be read using the support your language provides for accessing the environment, e.g. using process.env for Node.js. When using Python, you would need to import the os library, like in the following example:
...
import os
...
print("environment variable: " + os.environ['variable'])
Resource Link:
AWS Lambda Now Supports Environment Variables
Assuming you have created the .env file alongside your settings module:
.
├── .env
└── settings.py
Add the following code to your settings.py
# settings.py
from os.path import join, dirname
from dotenv import load_dotenv
dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)
Alternatively, you can use the find_dotenv() method, which will try to find a .env file by (a) guessing where to start, using __file__ or the working directory (which allows this to work in non-file contexts such as IPython notebooks and the REPL), and then (b) walking up the directory tree looking for the specified file, called .env by default.
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
Now you can access the variables, whether they come from the system environment or are loaded from the .env file.
Resource Link:
https://github.com/theskumar/python-dotenv
gepoggio answered in this post: https://github.com/serverless/serverless/issues/577#issuecomment-192781002
A workaround is to use python-dotenv:
https://github.com/theskumar/python-dotenv
import os
import dotenv
# 'here' was not defined in the original snippet; the usual definition is the
# directory containing this file, which the relative paths below assume
here = os.path.dirname(os.path.realpath(__file__))
dotenv.load_dotenv(os.path.join(here, "../.env"))
dotenv.load_dotenv(os.path.join(here, "../../.env"))
It tries to load it twice because when run locally the .env is in
project/.env, and when running in Lambda the .env is located in
project/component/.env
Both
import os
os.getenv('MY_ENV_VAR')
And
os.environ['MY_ENV_VAR']
are feasible solutions; just make sure in the Lambda GUI that the environment variables are actually there.
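One practical difference between the two (general Python behaviour, not Lambda-specific): os.environ['MY_ENV_VAR'] raises a KeyError when the variable is missing, while os.getenv('MY_ENV_VAR') returns None or a supplied default.
import os
# raises KeyError if MY_ENV_VAR is not set
try:
    value = os.environ["MY_ENV_VAR"]
except KeyError:
    value = None
# returns None (or the given default) instead of raising
value = os.getenv("MY_ENV_VAR", "fallback-value")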
I used this code; it includes both cases, setting the variable from the handler and setting it from outside the handler.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Trying new lambda stuff"""
import os
import configparser

class BqEnv(object):
    """Env and self variables settings"""

    def __init__(self, service_account, configfile=None):
        config = self.parseconfig(configfile)
        self.env = config
        self.service_account = service_account

    @staticmethod
    def parseconfig(configfile):
        """Connection and conf parser"""
        config = configparser.ConfigParser()
        config.read(configfile)
        env = config.get('BigQuery', 'env')
        return env

    def variable_tests(self):
        """Trying conf as a lambda variable"""
        my_env_var = os.environ['MY_ENV_VAR']
        print(my_env_var)
        print(self.env)
        return True

def lambda_handler(event, context):
    """Trying env variables."""
    print(event)
    configfile = os.environ['CONFIG_FILE']
    print(configfile)
    print(type(str(configfile)))
    bqm = BqEnv('some-json.json', configfile)
    bqm.variable_tests()
    return True
I tried this with a demo config file that has this:
[BigQuery]
env = prod
And the setting on lambda was the following:
Hope this can help!
os.environ["variable_name"]
In the configuration section of AWS Lambda, make sure you declare the variable with the same name you're trying to access in code. For this example, it should be variable_name.
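A minimal handler sketch using that lookup (the handler body and return shape here are just illustrative):
import os
def lambda_handler(event, context):
    # must match the name declared in the Lambda configuration section
    value = os.environ["variable_name"]
    return {"variable_name": value}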

Django: where is settings.py looking for imports, and why?

I have a Django app with a common directory structure:
project
---manage.py
---app
------__init__.py
------settings.py
------settings_secret.py
------a bunch of other files for the app itself
That settings_secret.py file contains variables from secrets.py which I do not want to send to GitHub. For some reason, I cannot seem to import it into settings.py. The first 5 lines of settings.py:
# Django settings for project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
import os
from settings_secret import *
Which fails with the partial stacktrace:
File "/foo/bar/project/app/settings.py", line 5, in <module> from settings_secret import *
ImportError: No module named 'settings_secret'
To debug, I created a test file inside /project/ like so:
from settings_secret import *
print(VARIABLE_FROM_SETTINGS_SECRET)
Which worked like a charm. So clearly, settings.py isn't looking in the right place for settings_secret. So where is it looking?
In settings.py, you should do: from .settings_secret import *
It works with the leading . because the full form is:
from app.settings_secret import *
Dropping the app prefix and using the dot is just shorthand for a relative import: you are telling Python to look in the package that contains settings.py (app), and then naming which module in that package to import.
If you just do from settings_secret import *, you are telling Python to look for a top-level module named settings_secret on the import path, which it cannot find from there.
Does that make sense to you?
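Putting that fix into the snippet from the question, the first lines of settings.py become:
# Django settings for project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
import os
from .settings_secret import *  # relative import, equivalent to app.settings_secret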
