When I move the import of the implementation module inside the test method, the test passes. However, when the import is at the top of the test module, I get an error stating that the environment variable is not found.
Why is the environment variable not set when I place the import at the top of the file, and how can I fix it without moving the import inside a function?
Error Message
test/test_engine.py:4: in <module>
    from engine import get_client_id
source/engine.py:2: in <module>
    env_val = os.environ['env']
venv/lib/python3.6/os.py:669: in __getitem__
    raise KeyError(key) from None
E   KeyError: 'env'
conftest.py
import pytest

@pytest.fixture(autouse=True)
def env_setup(monkeypatch):
    monkeypatch.setenv('env', 'dev')
Test Module - This Fails
import sys
import os

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../', 'source')))
from engine import get_client_id

def test_get_client_id():
    get_client_id()
Test Module - This Works
import sys
import os

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../', 'source')))

def test_get_client_id():
    from engine import get_client_id
    get_client_id()
engine.py
import os

env_val = os.environ['env']

def get_client_id():
    pass
The only place you appear to set the 'env' environment variable is in the env_setup fixture. Like all fixtures, the code in the fixture only applies while a test is running. When you import engine at the top level of your test module, no test is in effect, so (unless you've set 'env' somewhere else) os.environ['env'] is unset at that point. Importing engine from within a test function works because the fixture gives the environment variable a value before the test function runs.
I don't know what you're trying to accomplish by assigning os.environ['env'] to a top-level module variable, but you're probably going about it the wrong way. In particular, if you set the 'env' environment variable beforehand so that the module-level import works, then env_val will not be affected by the monkeypatching.
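If you can edit engine.py, one way out is to defer the environment lookup until call time, so the fixture has already run before the variable is read. A minimal sketch (the get_env helper is a made-up name, not part of the original code):

```python
import os

def get_env():
    # Looked up at call time, not at import time, so a
    # monkeypatch.setenv in an autouse fixture has already
    # taken effect before this runs.
    return os.environ['env']

def get_client_id():
    env_val = get_env()
    return env_val
```

With this shape, the top-level import of engine no longer touches os.environ, so importing it at the top of the test module is safe.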
Related
I am writing test cases for a module module.py that imports from another module legacy.py. legacy.py reads os.environ["some_var"] at module level. When I run the test cases for module.py, they fail with a KeyError for some_var in os.environ.
This is how code looks like:
module.py
from legacy import db

def fun():
    pass
legacy.py
import os

env = os.environ["some_var"]

class db:
    def __init__(self):
        pass
When running test cases for module.py, I am getting KeyError: 'some_var'.
I tried patching os.environ at module level (also placing the patch before importing from module.py in the test file), but the import statement runs before the patch can take effect. I looked for a similar question on Stack Overflow but didn't find the exact problem I am facing.
How can I mock it? Or any other suggestion to be able to run the test cases. Assume that I cannot modify the legacy.py file.
You could use mock.patch.dict; there is an example in the official documentation.
Here is a fully functional example based on your question.
module.py
import os

def get_env_var_value():
    env_var = os.environ['PATH']
    return env_var

print(get_env_var_value())
test_module.py
from unittest.mock import patch
from unittest import main, TestCase

import module

class MyClassTestCase(TestCase):
    @patch.dict('os.environ', {'PATH': '/usr/sbin'})
    def test_get_env_var_value(self):
        self.assertEqual(module.get_env_var_value(), '/usr/sbin')

if __name__ == '__main__':
    main()
The mock applies for the whole test scope, regardless of whether the variable is read inside a method or at class scope. Note, however, that a module-level read like the one in legacy.py happens at import time, so for that case the patch (or a real environment variable) must be in place before the module is first imported.
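For the import-time case, one option is to make the variable available while the import itself executes. A minimal sketch of patch.dict's scoping behaviour (read_at_import_time is a stand-in for `import legacy`, which is what actually reads the variable):

```python
import os
from unittest.mock import patch

def read_at_import_time():
    # Stand-in for `import legacy`, which executes
    # os.environ["some_var"] as a side effect of the import.
    return os.environ["some_var"]

# The key only exists inside the with block; patch.dict restores
# os.environ afterwards, so nothing leaks into other tests.
with patch.dict(os.environ, {"some_var": "test-value"}):
    value = read_at_import_time()

print(value)  # test-value
```

The same idea works with a real import placed inside the with block, as long as the module has not already been imported elsewhere (Python caches modules in sys.modules after the first import).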
A variable isDevelopment is inside the manager/__init__.py file:
isDevelopment = True
Within the same directory a file fusion.py attempts to import it at the file level:
from . import isDevelopment
Note: PyCharm is ambivalent about it and does not flag the import within the package, though it does complain when the same import is attempted from some other location, e.g. `..`.
When running
python3 manager/fusion.py
the following occurs:
ImportError: cannot import name 'isDevelopment' from '__main__'
Another attempt per one of the suggestions:
from ..manager import isDevelopment
This results in:
ValueError: attempted relative import beyond top-level package
Why is this attempted import not working - and what needs to be changed?
./test.py
./manager/__init__.py
./manager/fusion.py
__init__.py
isDevelopment = True
from .fusion import checkDevelopment
./manager/fusion.py
from . import isDevelopment

def checkDevelopment():
    print("isDevelopment = {0}".format(isDevelopment))
./test.py
import manager

if __name__ == "__main__":
    print("isDevelopment = {0}".format(manager.isDevelopment))
    manager.checkDevelopment()
Execute
python3 ./test.py
Output
isDevelopment = True
isDevelopment = True
Question
Are you attempting to execute manager/fusion.py to set the module, or do you want it to be part of your executable application? If you simply want to know the value of isDevelopment within the manager module, that can be achieved. If you want an executable function contained in manager, explore entry points using setup.py.
__init__.py is used to initialize the package. According to documentation at https://docs.python.org/3/tutorial/modules.html#packages
Users of the package can import individual modules from the package
You don't import what is in __init__.py; it's run automatically when you do an import.
In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package
The error message gives the clue: when you run python3 manager/fusion.py, the file executes as the top-level script (__main__), not as manager.fusion, so the relative import has no parent package to resolve against. Run it as part of the package instead (for example python3 -m manager.fusion from the project root), or import it from another module as test.py does above. Note that from . import name works for any attribute defined in __init__.py, not only submodules; if you have another module fusion2.py you could likewise import it with
from . import fusion2
and inside it you would be able to see isDevelopment.
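A minimal reproduction of the difference between the two ways of running the file (this assumes an empty scratch directory; the file contents mirror the layout above, minus the checkDevelopment helper):

```shell
mkdir -p manager
printf 'isDevelopment = True\n' > manager/__init__.py
printf 'from . import isDevelopment\nprint(isDevelopment)\n' > manager/fusion.py

# Run as a plain script: fusion becomes __main__, so the
# relative import has no parent package and fails.
python3 manager/fusion.py || true

# Run as a module of the package: the relative import resolves.
python3 -m manager.fusion
```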
I'm loading a submodule in python (2.7.10) with from app import sub where sub has a config variable. So I can run print sub.config and see a bunch of config variables. Not super complex.
If I change the config variables in the script, there must be a way to reload the module and see the change. I found a few instructions that indicated that reload(app.sub) would work, but I get an error:
NameError: name 'app' is not defined
And if I do just reload(sub) the error is:
TypeError: reload() argument must be module
If I do import app I can view the config with print app.sub.config and reload the whole package with reload(app).
I found instructions to automate reloading:
Reloading submodules in IPython
but is there no way to reload a submodule manually?
With Python 3, I try this:
import importlib
import sys

def m_reload():
    # iterate over a snapshot, since reloading can mutate sys.modules
    for name, module in list(sys.modules.items()):
        if name.startswith('your-package-name'):
            importlib.reload(module)
When you from foo import bar, you now have a module object named bar in your namespace, so you can
from foo import bar
bar.do_the_thing() # or whatever
reload(bar)
If you want more details on how the different import forms work, I personally found this answer particularly helpful.
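To see a reload pick up a change end to end, here is a self-contained Python 3 sketch (the module name cfg_demo is made up) that writes a throwaway module to disk, imports it, edits the file, and reloads:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # skip .pyc caching so reload always re-reads source

# Create a throwaway module on disk (cfg_demo is a made-up name).
tmp = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp) / "cfg_demo.py"
mod_path.write_text("value = 1\n")
sys.path.insert(0, tmp)

import cfg_demo
print(cfg_demo.value)        # 1

mod_path.write_text("value = 2\n")
importlib.reload(cfg_demo)   # re-executes the module's source in place
print(cfg_demo.value)        # 2
```

Note that reload updates the existing module object, so names bound with `from cfg_demo import value` beforehand keep their old values; only attribute access through the module object sees the change.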
from importlib import reload
import sys
# make a copy so we don't mutate sys.modules while iterating over it
ls = list(sys.modules.items())
for name, module in ls:
    if name == 'my_library_name':
        reload(module)
I have a Python module that tries to import a module and, if that fails, sets up a lightweight class to replicate some of the functionality as a fallback.
How can I write a unit test that covers both cases - when the module is present, and when the module is not present?
try:
    from foo import bar
except ImportError:
    class bar(object):
        def doAThing(self):
            return "blah"
For example, if bar is present, coverage will report that only the import line is covered and that everything from the except line to the end of the block is missed.
Modules are loaded from the sys.path array. If you clear that array then any non-core import will fail. The following commands were run in ipython in a virtual env with access to django
import sys
sys.path = []
import django
... ImportError ...
import os
... No ImportError ...
Your other alternative is to use two virtual environments. One which has the module and one which does not. That would make it slightly more difficult to run the tests though.
If this doesn't work then you can add a path to the start of the sys.path that has a module that will produce an ImportError when loaded. This works because the first match found when searching the sys.path is loaded, so this can replace the real module. The following is sufficient:
test.py
import sys
sys.path.insert(0, '.')
import foo
foo/__init__.py
raise ImportError()
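On Python 3 there is another option not covered by the answers above: shadow the module in sys.modules. An entry of None makes the import machinery raise ModuleNotFoundError (a subclass of ImportError), which drives the fallback branch. A sketch, using json as the stand-in for a module that really exists:

```python
import sys
from unittest import mock

# A None entry in sys.modules makes `import json` raise
# ModuleNotFoundError, so the except ImportError branch runs.
with mock.patch.dict(sys.modules, {"json": None}):
    try:
        import json
        fallback_used = False
    except ImportError:
        fallback_used = True

print(fallback_used)  # True
```

To cover both branches of the real module under test, run importlib.reload(mymodule) once under the patch and once without it; patch.dict restores sys.modules on exit, so the two runs don't interfere.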
I apologize if this is a basic question, but I can't seem to find the answer here or on Google. Basically I'm trying to create a single config module that would be available to all other modules imported in a Python application. It works if I have import config in each file, but I would like to make my config dynamic based on the environment the application is running in, and I'd prefer not to copy that logic into every file.
Here's an example:
app.py:
import config
import submodule
# do other stuff
submodule.py:
print config.some_config_variable
But python of course complains that config isn't defined.
I did find some stuff about global variables but that didn't seem to work either. Here's what I tried:
Edit: I changed this to show that I'd like the actual config being imported to be dynamic. For now I have a static config module in my tests just to figure out how to import globally; I'll worry about the selection logic afterwards.
app.py
# logic here that defines some_dynamic_config
global config
config = __import__(some_dynamic_config)
import submodule
submodule.py
print config.some_config_variable
But config still isn't defined.
I'm aware that I could create a single config.py and place logic to set the variables but I dislike that. I prefer the config file to just configuration and not contain a bunch of logic.
You've got to put your logic somewhere. Your config.py could be a module that determines which config files to load, something like this:
#config.py
import sys

from common_config import *

if sys.platform == 'darwin':
    from mac_config import *
elif sys.platform == 'win32':
    from win32_config import *
else:
    from linux_config import *
With this approach, you can put common settings in common_config.py and platform-specific settings in their respective files. Since the platform-specific settings are imported after common_config, you can also override anything in common_config by defining it again in the platform-specific files.
This is a common pattern used in Django settings files, and works quite well there.
You could also wrap each import call with try... except ImportError: blocks if need be.
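For example, an optional per-machine override file can be layered on top (a sketch of the pattern; local_config is a made-up module name):

```python
# config.py -- sketch of an optional-override layer
from common_config import *

try:
    from local_config import *   # optional, per-machine overrides
except ImportError:
    pass                         # fine if no local_config.py exists
```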
Modules are shared, so each module can import config without issue.
config.py itself can have logic to set its global variables however you like. As an example:
config.py:
import sys

if sys.platform == "win32":
    temp = "c:\\temp"
else:
    temp = "/tmp"
Now import config in any module to use it:
import config
print "Using tmp dir: %s" % config.temp
If you have a module that you know will be initialized before anything else, you can create an empty config.py, and then set it externally:
import config
config.temp = "c:\\temp"
But you'll need to run this code before anything else that uses it. The empty module can be used as a singleton.
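Because Python caches modules in sys.modules, every importer sees the same config object. The following self-contained sketch fakes the empty module with types.ModuleType (standing in for an empty config.py on disk) to show the sharing:

```python
import sys
import types

# Fake an empty config module (stands in for an empty config.py file).
config = types.ModuleType("config")
sys.modules["config"] = config

# "Initialize" it once, early in the application...
config.temp = "/tmp/demo"

# ...and any later import anywhere in the program gets the same object.
import config as cfg_elsewhere
print(cfg_elsewhere.temp)  # /tmp/demo
```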