I am writing a lot of tests at the moment that all use pytest, so the first line of every file is import pytest. Is there a way to import it somewhere else, for example in the __init__.py?
tests
- unit_tests
  - __init__.py
  - test_service.py
  - test_plan.py
  - test_api.py
An import statement has two purposes:
Define a module, if necessary
Add a name for that module in the current scope.
While putting import pytest in __init__.py would take care of the first one, it does nothing for the second. You have no way of using pytest in each module unless you import sys and use sys.modules['pytest'] everywhere you would have used pytest alone. But that's ugly, so you might think, "Hey, I'll just write
pytest = sys.modules['pytest']
to make a global name pytest refer to the module."
But that's exactly what import pytest already does.
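To make the point concrete, here is a minimal sketch (file names follow the tree above) showing that an import in the package's __init__.py binds the name as an attribute of the package, not as a global in each test module:
# tests/unit_tests/__init__.py
import pytest  # defines the module and binds the name "pytest" here only

# tests/unit_tests/test_service.py
def test_service():
    pytest.skip("not ready yet")  # NameError: name 'pytest' is not defined
So each test file still needs its own import pytest line; that one line per file is the idiomatic cost.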
Related
I would like a way to detect if my module was executed directly, as in import module or from module import * rather than by import module.submodule (which also executes module), and have this information accessible in module's __init__.py.
Here is a use case:
In Python, a common idiom is to add import statements to a module's __init__.py file, so as to "flatten" the module's namespace and make its submodules accessible directly. Unfortunately, doing so can make loading a specific submodule very slow, as all the other siblings imported in __init__.py will also execute.
For instance:
module/
    __init__.py
    submodule/
        __init__.py
        ...
    sibling/
        __init__.py
        ...
By adding to module/__init__.py:
from .submodule import *
from .sibling import *
It is now possible for users of the module to access definitions in submodules without knowing the details of the package structure (i.e. from module import SomeClass, where SomeClass is defined somewhere in submodule and exposed in its own __init__.py file).
However, if I now run submodule directly (as in import module.submodule, by calling python3 -m module.submodule, or even indirectly via pytest) I will also, unavoidably, execute sibling! If sibling is large, this can slow things down for no reason.
I would instead like to write module/__init__.py something like:
if __???__ == 'module':
    from .submodule import *
    from .sibling import *
Where __???__ gives me the fully qualified name of the import. Any similar mechanism would also work, although I'm mostly interested in the general case (detecting direct execution) rather than this specific example.
Even if it were possible, what is being desired would result in undefined behavior, in the sense that whether or not the flattened names are importable from module would depend on which import happened to run first, given how the import system actually works.
Hypothetically, suppose there were some __dunder__ that could disambiguate which import statement was used to import module/__init__.py (e.g. import module and from module import *, vs. import module.submodule). In the first case, module could trigger the subsequent (slow) imports to produce a "flattened" version of the desired names, while in the latter case (import module.submodule) it would skip them, so module would not contain any of the "flattened" bindings.
To illustrate the example a bit more: say one may import SiblingClass by simply doing from module import SiblingClass, because the module/__init__.py file executes the from .sibling import * statement to create that binding. But if import module.submodule had skipped that flattening import, we would get the following scenario:
import module.submodule
# module.submodule gets imported
from module import SiblingClass
# ImportError will occur
Why is that? It is simply due to how Python imports a file: the source file is executed in its entirety exactly once, assigning imports, function and class declarations to the designated names, and the resulting module object is registered in sys.modules under its import name. Importing the module again will not execute the file again. So if the from .sibling import * statement was not executed during the initial import (i.e. import module.submodule), it will never be executed during subsequent imports of the same module; the object produced by the initial import and stored in sys.modules is simply returned (unless the module is reloaded manually, the code for the module is never executed again).
You may verify this fact by putting a print statement into a file, importing the corresponding module to see the output produced, and observing that no further output is produced on subsequent imports of that module (related: What happens when a module is imported twice?).
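A quick way to see this, assuming a file mod.py on the import path that contains only a print call:
# mod.py
print("executing mod")

>>> import mod
executing mod
>>> import mod   # no output: the cached entry in sys.modules is reused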
Effectively, the desired functionality as described in the question cannot be implemented in Python.
A related thread on this topic: How to only import sub module without exec __init__.py in the package
This is not a complete solution, but standalone py.test (ignore __init__.py files) proposes setting a global flag to detect when code is running under test. This fixes the problem for tests at least, provided the modules concerned don't import each other.
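For reference, the flag pattern from that answer (also described in the pytest documentation) looks roughly like this; the attribute name _called_from_test is just a convention:
# conftest.py
def pytest_configure(config):
    import sys
    sys._called_from_test = True

def pytest_unconfigure(config):
    import sys
    del sys._called_from_test

# module/__init__.py
import sys

if not getattr(sys, "_called_from_test", False):
    from .submodule import *
    from .sibling import *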
I am writing a pytest plugin that should test software that's designed to work inside a set of specific environments.
The software I'm writing is run inside a bigger framework, which makes certain Python modules available only when running my Python software inside the framework.
In order to test my software, I'm required to "mock" or fake an entire module (actually, quite a few). I'll need to implement its functionality in some kind of similar-looking way, but my question is how should I make this fake Python module available to my software's code, using a py.test plugin?
For example, let's assume I have the following code in one of my source files:
import fwlib

def fw_sum(a, b):
    return fwlib.sum(a, b)
However, the fwlib module is only made available by the framework I run my software from, and I cannot run my tests inside the framework.
How would I make sure, from within a pytest plugin, that a module named fwlib is already defined in sys.modules? Granted, I'll need to implement fwlib.sum myself. I'm looking for recommendations on how to do just that.
pytest provides a fixture for this use-case: monkeypatch.syspath_prepend.
You may prepend a path to the sys.path list of import locations. Write a fake fwlib.py and include it with your tests, prepending its directory as necessary. Like the other test modules, it needn't be included with the distribution.
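A minimal sketch of that suggestion, assuming the fake fwlib.py lives in a tests/fake_modules/ directory (the directory name is hypothetical; note the caveat about module-level imports below):
# conftest.py
import os
import pytest

@pytest.fixture(autouse=True)
def fake_fwlib(monkeypatch):
    # put the directory holding the fake fwlib.py at the front of sys.path
    fake_dir = os.path.join(os.path.dirname(__file__), "fake_modules")
    monkeypatch.syspath_prepend(fake_dir)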
After playing with this myself, I couldn't actually figure out how to get the fixture to mock module-level imports correctly from the library code. By the time the tests run, the library code has already been imported, and then it is too late to patch.
However, I can offer a different solution that works: you may inject the name from within conftest.py, which gets imported first. The subsequent import statement within the code under test will just re-use the object already present in sys.modules.
Package structure:
$ tree .
.
├── conftest.py
├── lib
│ └── my_lib.py
└── tests
└── test_my_lib.py
2 directories, 3 files
Contents of files:
# conftest.py
import sys

def fwlib_sum(a, b):
    return a + b

# type(sys) is the module type; this creates an empty module named "fwlib"
module = type(sys)('fwlib')
module.sum = fwlib_sum

# register it so that a later "import fwlib" finds this fake
sys.modules['fwlib'] = module
library file:
# lib/my_lib.py
import fwlib

def fw_sum(a, b):
    return fwlib.sum(a, b)
test file:
# tests/test_my_lib.py
import my_lib

def test_sum():
    assert my_lib.fw_sum(1, 2) == 3
Just to add a little more detail to wim's good answer: you can use it with submodules too, like so:
import sys

module = type(sys)("my_module_name")
module.submodule = type(sys)("my_submodule_name")
module.submodule.something = something  # whatever attribute you need to fake

sys.modules["my_module_name"] = module
sys.modules["my_module_name.my_submodule_name"] = module.submodule
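Assuming the snippet above has run (e.g. from conftest.py), both import styles then resolve to the fake objects:
import my_module_name
from my_module_name.my_submodule_name import something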
I have the following setup. The first file (config.py) defines a path that tells file1.py which module to import.
#file:config.py
moduleToBeImported="/here/is/some/path/file2.py"
import file1
Then in file1.py, I import the module that is defined in config.py
#file:file1.py
import imp
foo = imp.load_source('module.name', moduleToBeImported)
Is it possible to pass the variable moduleToBeImported from config.py to file1.py?
In the current setup I get the expected error: NameError: name 'moduleToBeImported' is not defined
Short Answer - NO
Long Answer - Maybe you can. And no, you shouldn't: it would create a circular import, and that is bad.
For example, say you import config.py from file1.py to get at moduleToBeImported. When you run config.py, it imports file1.py, whose code runs just like any other Python file; file1.py then imports config.py, which is still in the middle of its own import. In effect, you end up trying to import file1.py from within file1.py's own import.
Python may or may not detect this cycle before it wreaks havoc on your system.
Generally speaking, circular imports are a very bad coding practice.
What you can do instead: your config.py should contain minimal runnable code. Keep only configuration variables, settings, and general utility methods in there. In short, if file1.py contains critical code, it shouldn't be imported into config.py; you can import config.py in file1.py, though.
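A minimal sketch of that inversion, using importlib in place of the long-deprecated imp module (file names follow the question):
#file:config.py
# configuration only -- no imports of application modules
moduleToBeImported = "/here/is/some/path/file2.py"

#file:file1.py
import importlib.util
import config  # file1 imports config, never the other way around

spec = importlib.util.spec_from_file_location("module.name", config.moduleToBeImported)
foo = importlib.util.module_from_spec(spec)
spec.loader.exec_module(foo)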
More reading here: Python circular importing?
I've run through many posts about this, but it still doesn't seem to work. The problem is pretty clear-cut. I've got the following hierarchy.
main.py
DirA/
    __init__.py
    hello.py
DirB/
    __init__.py
    foo.py
    bla.py
    lol.py
The __init__.py at DirA is empty. The respective one at DirB just exports the foo module:
__all__ = ["foo"]
The main.py has the following code
import DirA
import DirB

hey()       # def written in hello.py
foolish1()  # def written in foo.py
foolish2()  # def written in foo.py
Long story short, I got NameError: name 'foo' is not defined. Any ideas? Thanks in advance.
You only get what you import. Therefore, in your main, you only get DirA and DirB. You would use them in one of these ways:
import DirA
DirA.something_in_init_py()
# Importing hello:
import DirA.hello
DirA.hello.something_in_hello_py()
# Using a named import:
from DirA.hello import something_in_hello_py
something_in_hello_py()
And in DirB, just make the __init__.py empty as well. The only use of __all__ is for when you want to import *, which you don't want because, as they say, explicit is better than implicit.
But in case you are curious, it would work this way:
from DirB import *
something_in_dirb()
By default, import * will import everything it can find that does not start with an underscore. Specifying __all__ restricts what is imported to the names listed in __all__. See this question for more details.
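For instance, with the package above, a one-line DirB/__init__.py controls the star import (a minimal sketch):
# DirB/__init__.py
__all__ = ["foo"]  # "from DirB import *" now imports only the submodule foo

# main.py
from DirB import *
foo.foolish1()  # works; bla and lol were never imported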
Edit: about __init__.py.
The __init__.py is not really connected to the importing stuff. It is just a special file with the following properties:
Its existence means the directory is a Python package, with several modules in it. If it does not exist, Python will refuse to import anything from the directory.
It will always be loaded before loading anything else in the directory.
Its content will be available as the package itself.
Just try it: put this in DirA/__init__.py:
foo = 42
Now, in your main:
from DirA import foo
print(foo) # 42
It can be useful, because you can import some of your submodules in the __init__.py to hide the inner structure of your package. Suppose you build an application with classes Author, Book and Review. To make it easier to read, you give each class its own file in a package. Now in your main, you have to import the full path:
from myapp.author import Author
from myapp.book import Book
from myapp.review import Review
Clearly not optimal. Now suppose you put those exact lines above in your __init__.py; you may then simplify your main like this:
from myapp import Author, Book, Review
Python will load the __init__.py, which will in turn load all submodules and import the classes, making them available on the package. Now your main does not need to know where the classes are actually implemented.
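Concretely, the package's __init__.py would hold exactly those imports (a sketch; relative imports like from .author import Author work just as well here):
# myapp/__init__.py
from myapp.author import Author
from myapp.book import Book
from myapp.review import Review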
Have you tried something like this:
One way
from DirA import hello
Another way
from DirA.hello import hey
If those don't work, then append a new path to the system path.
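For example (a sketch; the path is hypothetical and should point at the directory that contains DirA):
import sys
sys.path.append("/path/to/parent/of/DirA")

from DirA import hello
hello.hey()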
You need to import the function itself:
How to call a function from another file in Python?
In your case:
from DirA import foolish1, foolish2
Let's say I have such a directory structure:
- Documents/
  - thesis_program/
    - __init__.py
    - classes.py
    - utils.py
  - GE_Test.py
  - GE_Test_fail.py
classes.py and utils.py contain some classes and functions.
GE_Test.py and GE_Test_fail.py contain exactly the same code, except for the import part.
In GE_Test.py I import classes and utils this way:
from utils import execute
from classes import Grammatical_Evolution
While in GE_Test_fail.py, I import classes and utils this way:
from thesis_program.utils import execute
from thesis_program.classes import Grammatical_Evolution
And unexpectedly I get different results. Is there anything wrong here?
Am I importing the modules correctly?
I can ensure that the results should be the same, because I generate the random numbers with a fixed seed.
Also, classes.py depends on utils.py, since I have several common functions in utils.py. I suspect that utils is also a name used by the system, so in the second case (GE_Test_fail.py) the system's utils overrides my utils.py. But that doesn't seem to make sense to me.
The complete source code of classes.py and utils.py is available here (if it helps to discover what's wrong): https://github.com/goFrendiAsgard/feature-extractor
And also, the screenshots: https://picasaweb.google.com/111207164087437537257/November25201204?authuser=0&authkey=Gv1sRgCOKxot2a2fTtlAE&feat=directlink
Add the two lines below to your test files that live outside your thesis folder, and keep everything else the same; for example, in GE_Test.py:
import sys
sys.path.insert(0, "/path to your thesis folder/thesis_program")

from utils import execute
from classes import Grammatical_Evolution
EDIT: Or use the following to make it more dynamic.
(Caution: finding the path with os.path.abspath('./thesis_program') only works when your test files and your thesis folder are in the same directory, which may not always be the case; if you fix the paths permanently in your code as above, you are free to run the tests from anywhere on your system.)
import os, sys

lib_path = os.path.abspath('./thesis_program')
sys.path.insert(0, lib_path)