I used easy_install to install pytest on a Mac and started writing tests for a project with a file structure like so:
repo/
|--app.py
|--settings.py
|--models.py
|--tests/
|  |--test_app.py
Run py.test while in the repo directory, and everything behaves as you would expect.
But when I try that same thing on either Linux or Windows (both have pytest 2.2.3 on them), it barks whenever it hits its first import of something from my application path. For instance, from app import some_def_in_app.
Do I need to be editing my PATH to run py.test on these systems?
I'm not sure why py.test does not add the current directory to PYTHONPATH itself, but here's a workaround (to be executed from the root of your repository):
python -m pytest tests/
It works because Python adds the current directory to sys.path for you when a module is run with -m.
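With the layout from the question, a test can then import straight from app.py (a small sketch; the placeholder assertion is mine, while some_def_in_app comes from the question):
# tests/test_app.py
from app import some_def_in_app  # importable because "." is on sys.path when using python -m pytest

def test_some_def_in_app_is_importable():
    # placeholder check; replace with real assertions about your function
    assert callable(some_def_in_app)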
Recommended approach for pytest>=7: use the pythonpath setting
Recently, pytest has added a new core plugin that supports sys.path modifications via the pythonpath configuration value. The solution is thus much simpler now and doesn't require any workarounds anymore:
pyproject.toml example:
[tool.pytest.ini_options]
pythonpath = [
"."
]
pytest.ini example:
[pytest]
pythonpath = .
The path entries are calculated relative to the rootdir, thus . adds the repo directory to sys.path in this case.
Multiple path entries are also allowed: for a layout
repo/
├── src/
│   └── lib.py
├── app.py
└── tests
    ├── test_app.py
    └── test_lib.py
the configuration
[tool.pytest.ini_options]
pythonpath = [
".", "src",
]
or
[pytest]
pythonpath = . src
will put both directories on sys.path, so that
import app
import lib
will both work.
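As a concrete sketch of what the tests can then look like (add is a hypothetical function assumed to live in src/lib.py; it is not part of the question):
# tests/test_lib.py
import app              # found via the "." entry on sys.path
from lib import add     # found via the "src" entry; "add" is a made-up example function

def test_add():
    assert add(2, 3) == 5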
Original answer (not recommended for recent pytest versions; use for pytest<7 only): conftest solution
The least invasive solution is adding an empty file named conftest.py in the repo/ directory:
$ touch repo/conftest.py
That's it. No need to write custom code for mangling sys.path, remember to drag PYTHONPATH along, or place __init__.py into directories where it doesn't belong (using python -m pytest as suggested in Apteryx's answer is a good solution too, though!).
The project directory afterwards:
repo
├── conftest.py
├── app.py
├── settings.py
├── models.py
└── tests
    └── test_app.py
Explanation
pytest looks for conftest modules during test collection to gather custom hooks and fixtures, and in order to import the custom objects from them, pytest adds the parent directory of the conftest.py to sys.path (in this case, the repo directory).
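A quick throwaway check makes the effect visible (a sketch; it only inspects sys.path and assumes the repo/ directory name from the question):
# repo/tests/test_conftest_effect.py
import sys

def test_repo_root_is_on_sys_path():
    # the directory containing conftest.py (repo/) should now appear on sys.path
    assert any(p.rstrip("/\\").endswith("repo") for p in sys.path)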
Other project structures
If you have another project structure, place the conftest.py in the package root dir (the one that contains packages but is not a package itself, so does not contain an __init__.py), for example:
repo
├── conftest.py
├── spam
│   ├── __init__.py
│   ├── bacon.py
│   └── egg.py
├── eggs
│   ├── __init__.py
│   └── sausage.py
└── tests
    ├── test_bacon.py
    └── test_egg.py
src layout
Although this approach can be used with the src layout (place conftest.py in the src dir):
repo
├── src
│   ├── conftest.py
│   ├── spam
│   │   ├── __init__.py
│   │   ├── bacon.py
│   │   └── egg.py
│   └── eggs
│       ├── __init__.py
│       └── sausage.py
└── tests
    ├── test_bacon.py
    └── test_egg.py
beware that adding src to PYTHONPATH defeats the purpose and benefits of the src layout! You will end up testing the code from the repository and not the installed package. If you need to do it, maybe you don't need the src dir at all.
Where to go from here
Of course, conftest modules are not just some files to help the source code discovery; they're where all the project-specific enhancements of the pytest framework and the customization of your test suite happen. pytest has a lot of information on conftest modules scattered throughout their docs; start with conftest.py: local per-directory plugins.
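For instance, instead of staying empty, repo/conftest.py could also host a shared fixture that every test can request without importing anything (a sketch; the settings_dict fixture and its contents are invented for illustration):
# repo/conftest.py
import pytest

@pytest.fixture
def settings_dict():
    # hypothetical shared data; a real project might load settings.py here instead
    return {"debug": True, "db_url": "sqlite://"}

# repo/tests/test_app.py
def test_uses_shared_fixture(settings_dict):
    assert settings_dict["debug"] is True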
Also, SO has an excellent question on conftest modules: In py.test, what is the use of conftest.py files?
I had the same problem. I fixed it by adding an empty __init__.py file to my tests directory.
Yes, the source folder is not in Python's path if you cd to the tests directory.
You have a few options:
Add the path manually to the test files. Something like this:
import sys, os
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../')
Run the tests with the env var PYTHONPATH=../.
Run pytest itself as a module with:
python -m pytest tests
This happens when the project hierarchy is, for example, package/src and package/tests, and in tests you import from src. Executing pytest as a module adds the current directory to sys.path, so imports from the project root resolve as absolute imports rather than relative to the execution location.
You can run pytest with PYTHONPATH set in the project root:
PYTHONPATH=. py.test
Or install the package in editable mode with pip:
pip install -e . # install package using setup.py in editable mode
I had the same problem in Flask.
When I added:
__init__.py
to the tests folder, the problem disappeared :)
Probably the application couldn't recognize the tests folder as a package.
I created this as an answer to your question and my own confusion. I hope it helps. Pay attention to PYTHONPATH in both the py.test command line and in the tox.ini.
https://github.com/jeffmacdonald/pytest_test
Specifically: You have to tell py.test and tox where to find the modules you are including.
With py.test you can do this:
PYTHONPATH=. py.test
And with tox, add this to your tox.ini:
[testenv]
deps = -r{toxinidir}/requirements.txt
commands = py.test
setenv =
    PYTHONPATH = {toxinidir}
I fixed it by removing the top-level __init__.py in the parent folder of my sources.
I started getting weird ConftestImportFailure: ImportError('No module named ...') errors when I had accidentally added an __init__.py file to my src directory (which was not supposed to be a Python package, just a container for all the source).
It is a bit of a shame that this is an issue in Python... But just adding this environment variable is the most comfortable way, IMO:
export PYTHONPATH=$PYTHONPATH:.
You can put this line in your .zshrc or .bashrc file.
I was having the same problem when following the Flask tutorial and I found the answer on the official Pytest documentation.
It's a little shift from the way I (and I think many others) are used to doing things.
You have to create a setup.py file in your project's root directory with at least the following two lines:
from setuptools import setup, find_packages
setup(name="PACKAGENAME", packages=find_packages())
where PACKAGENAME is your app's name. Then you have to install it with pip:
pip install -e .
The -e flag tells pip to install the package in editable or "develop" mode. So the next time you run pytest it should find your app in the standard PYTHONPATH.
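After the editable install, a test module can import the package by its installed name rather than relying on the current working directory (a sketch; PACKAGENAME is the placeholder from above and some_function is invented for illustration):
# tests/test_app.py
from PACKAGENAME import some_function  # PACKAGENAME: whatever you passed to setup(name=...)

def test_some_function_is_importable():
    assert callable(some_function)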
I had a similar issue. pytest did not recognize a module installed in the environment I was working in.
I resolved it by also installing pytest into the same environment.
Also, if you run pytest within your virtual environment, make sure the pytest module is installed within your virtual environment. Activate your virtual environment and run pip install pytest.
For me the problem was the tests.py generated by Django alongside the tests directory. Removing tests.py solved the problem.
I got this error as I used relative imports incorrectly. In the OP example, test_app.py should import functions using e.g.
from repo.app import *
However liberally __init__.py files are scattered around the file structure, the following does not work and creates the kind of ImportError seen, unless the files and test files are in the same directory:
from app import *
Here's an example of what I had to do with one of my projects:
Here’s my project structure:
microbit/
microbit/activity_indicator/activity_indicator.py
microbit/tests/test_activity_indicator.py
To be able to access activity_indicator.py from test_activity_indicator.py I needed to:
start test_activity_indicator.py with the correct import:
from microbit.activity_indicator.activity_indicator import *
put __init__.py files throughout the project structure:
microbit/
microbit/__init__.py
microbit/activity_indicator/__init__.py
microbit/activity_indicator/activity_indicator.py
microbit/tests/__init__.py
microbit/tests/test_activity_indicator.py
According to a post on Medium by Dirk Avery (and supported by my personal experience) if you're using a virtual environment for your project then you can't use a system-wide install of pytest; you have to install it in the virtual environment and use that install.
In particular, if you have it installed in both places then simply running the pytest command won't work because it will be using the system install. As the other answers have described, one simple solution is to run python -m pytest instead of pytest; this works because it uses the environment's version of pytest. Alternatively, you can just uninstall the system's version of pytest; after reactivating the virtual environment the pytest command should work.
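A quick way to confirm which pytest is actually being picked up is a two-line diagnostic (a sketch; run it both with and without the virtual environment activated and compare the paths):
import shutil

import pytest

print(shutil.which("pytest"))  # the pytest executable found first on PATH
print(pytest.__file__)         # the pytest module imported by this interpreter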
I was getting this error due to something even simpler (you could even say trivial). I hadn't installed the pytest module. So a simple apt install python-pytest fixed it for me.
'pytest' would have been listed in setup.py as a test dependency. Make sure you install the test requirements as well.
Since no one has suggested it, you could also pass the path to the tests in your pytest.ini file:
[pytest]
...
testpaths = repo/tests
See documentation: https://docs.pytest.org/en/6.2.x/customize.html#pytest-ini
Side effect for Visual Studio Code: it should pick up the unit tests in the UI.
We have fixed the issue by adding the following environment variable.
PYTHONPATH=${PYTHONPATH}:${PWD}/src:${PWD}/test
As pointed out by Luiz Lezcano Arialdi, the correct solution is to install your package as an editable package.
Since I am using Pipenv, I thought I would add to his answer a step-by-step guide to installing the current path as an editable package with Pipenv, allowing you to run pytest without the need for any path-mangling code or loose files.
You will need to have the following minimal folder structure (documentation):
package/
    package/
        __init__.py
        module.py
    tests/
        module_test.py
    setup.py
setup.py mostly has the following minimum code (documentation):
import setuptools
setuptools.setup(
    name='package',  # Change to your package name
    packages=setuptools.find_packages(),
)
Then you just need to run pipenv install --dev -e . and Pipenv will install the current path as an editable package (the --dev flag is optional) (documentation).
Now you should be able to run pytest without problems.
If this pytest error appears not for your own package, but for a Git-installed package in your package's requirements.txt, the solution is to switch to editable installation mode.
For example, suppose your package's requirements.txt had the following line:
git+https://github.com/foo/bar.git
You would instead replace it with the following:
-e git+https://github.com/foo/bar.git#egg=bar
If nothing works, make sure your test_module.py is located under the correct directory inside tests.
Sometimes it will give ModuleNotFoundError not because modules are misplaced or export PYTHONPATH="${PWD}:${PYTHONPATH}" is not working; it's because test_module.py is placed in the wrong directory under the tests folder.
The layout should be a recursive 1-to-1 mapping: the root folder is named tests, and the name of each file that contains test code starts with test_.
For example:
./nlu_service/models/transformers.py
./tests/models/test_transformers.py
This was my experience.
Very often the tests were interrupted because a module could not be imported.
After research, I found out that the system was looking for the file in the wrong place, and that we can easily overcome the problem by copying the file containing the module into the same folder as stated, so that it is properly imported.
Another solution proposal would be to change the import declaration and show MutPy the correct path of the unit. However, since multiple units can have this dependency, meaning we would also need to commit changes to their declarations, we preferred to simply move the unit to the folder.
My solution:
Create the conftest.py file in the test directory containing:
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + "/relative/path/to/code/")
This will add the folder of interest to the Python interpreter path without modifying every test file, setting an environment variable, or messing with absolute/relative paths.
I want my tests folder separate from my application code. My project structure is like so:
myproject/
    myproject/
        myproject.py
        moduleone.py
    tests/
        myproject_test.py
myproject.py
from moduleone import ModuleOne
class MyProject(object):
    ....
myproject_test.py
from myproject.myproject import MyProject
import pytest
...
I use myproject.myproject since I use the command
python -m pytest
from the project root directory ./myproject/
However, then the imports within those modules fail with
E ModuleNotFoundError: No module named 'moduleone'
I am running Python 3.7 and have read that since 3.3, empty __init__.py files are no longer needed, which means my project becomes an implicit namespace package.
However, I have tried adding an __init__.py file in myproject/myproject/ and also tried adding a conftest.py file in myproject/, but neither works.
I have read answers that say to mess with the paths and then upvoted comments in other questions saying not to.
What is the correct way and what am I missing?
EDIT:
Possibly related, I used a requirements.txt to install pytest using pip. Could this be related? And if so, what is the correct way to install pytest in this case?
EDIT 2:
One of the paths in sys.path is /usr/src/app/, which is a docker volume linked to /my/local/path/myproject/.
Should the volume be /my/local/path/myproject/myproject/ instead?
Not sure if this solution was specific to my problem, but I simply add __init__.py to my tests folder and that solved the problem.
Solution: use the PYTHONPATH env. var
PYTHONPATH=. pytest
As mentioned by @J_H, you need to explicitly add the root directory of your project, since pytest only adds to sys.path the directories where test files are (which is why @Mak2006's answer worked).
Good practice: use a Makefile or some other automation tool
If you do not want to type that long command all the time, one option is to create a Makefile in your project's root dir with, e.g., the following:
.PHONY: test
test:
	PYTHONPATH=. pytest
Which allows you to simply run:
make test
Another common alternative is to use some standard testing tool, such as tox.
Be sure to include . dot in the $PYTHONPATH env var.
Use $ python -m site, or this code fragment to debug such issues:
import pprint
import sys
pprint.pprint(sys.path)
Your question managed to use myproject at three different levels. At least during debugging you might want to use three distinct names, to reduce possible confusion.
In my case I added a __init__.py to my test directory with this inside it:
import sys
sys.path.append('.')
My app code is at the same level as my test directory.
In my case it is because I installed pytest on the system level but not in my virtual environment.
You can test this by python -m pytest. If you see ModuleNotFoundError: No module named 'pytest' then your pytest is at the system level.
Installing pytest while the virtual environment is activated will fix this.
I kept everything the same and just added a blank test file at the root folder. Solved.
Here are the findings, this problem really bugged me for a while.
My folder structure was
mathapp/
    - server.py
    - configuration.py
    - __init__.py
    - static/
        - home.html
tests/
    - functional
        - test_errors.py
    - unit
        - test_add.py
and pytest would complain with the ModuleNotFoundError.
I introduced a mock test file at the same level as the mathapp and tests directories. The file contained nothing. Now pytest does not complain.
Result without the file
$ pytest
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: C:\mak2006\workspace\0github\python-rest-app-cont
collected 1 item / 1 error
=================================== ERRORS ====================================
_______________ ERROR collecting tests/functional/test_func.py ________________
ImportError while importing test module 'C:\mainak\workspace\0github\python-rest-app-cont\tests\functional\test_func.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests\functional\test_func.py:4: in <module>
from mathapp.service import sum
E ModuleNotFoundError: No module named 'mathapp'
=========================== short test summary info ===========================
ERROR tests/functional/test_func.py
!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
============================== 1 error in 0.24s ===============================
Results with the file
$ pytest
============================= test session starts =============================
platform win32 -- Python 3.8.2, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: C:\mak2006\workspace\0github\python-rest-app-cont
collected 2 items
tests\functional\test_func.py . [ 50%]
tests\unit\test_unit.py . [100%]
============================== 2 passed in 0.11s ==============================
Better Solution
Try adding a single __init__.py to your tests directory (a level up from your module) with these contents:
import sys
sys.path.append('.')
sys.path.append('./my_module')
Your file structure should look like this:
project
    my_module
        package.py
    tests
        __init__.py
        my_tests.py
The first append to sys.path will enable you to import from <your-module-name> and the second will enable your packages to import as normal.
In your tests you can import by using from my_module.package import function whereas in your module import using simply from package import function.
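A corresponding test might then look like this (a sketch; function is a stand-in for whatever package.py actually defines):
# tests/my_tests.py
from my_module.package import function  # resolved via the sys.path entries added in tests/__init__.py

def test_function_is_importable():
    assert callable(function)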
Edit: Seems like this solution is not universal (like the others).
I was able to solve this issue using help from this answer.
Add an __init__.py to your main module directory that contains
import pathlib, sys
sys.path.append(str(pathlib.Path(__file__).parent))
I also added another __init__.py to my tests directory (thanks to this answer) with
import sys
sys.path.append('.')
So it seems that the sys.path has to include the application directory rather than the project root folder containing the application directory and test directory.
So in my case /my/local/path/myproject/myproject/ had to be in sys.path rather than /my/local/path/myproject/.
Then I could run pytest in /my/local/path/myproject/ (didn't need python -m pytest). This meant that the modules within /myproject/myproject/ could find each other and the tests as well without any namespace nesting.
So my tests looked like
from moduleone import ModuleOne
import pytest
def test_fun():
assert ModuleOne.example_func() == True
That said, there seem to be many gotchas, so I have no idea if this is correct..
I suggest you have a code structure like this:
myproject/
    helpers/
        moduleone.py
        moduletwo.py
    tests/
        myproject_test.py
        conftest.py
And the content of conftest.py file is:
pytest_plugins = ['helpers']
Run pytest again.
Using poetry and pytest 5.4.3, I had the following structure (some folders / files have been removed for clarity):
project structure
.
├── my_app
│ ├── __init__.py
│ ├── main.py
│ ├── model.py
│ └── orm.py
├── poetry.lock
├── pyproject.toml
├── README.rst
└── tests
    ├── __init__.py
    ├── conftest.py
    ├── test_my_app.py
    └── utilities
        └── db_postgresql_inmemory.py
tests/conftest.py
pytest_plugins = [
"utilities.db_postgresql_inmemory",
]
which generated a module not found error for the fixture:
ImportError: Error importing plugin "utilities.db_postgresql_inmemory": No module named 'utilities'
None of the other answers have worked for me, as I have tried to add:
[me@linux ~/code/my_app]$ touch tests/utilities/__init__.py
[me@linux ~/code/my_app]$ touch ./test_blank.py
I could make the import from conftest.py work by REMOVING both __init__.py files:
[me@linux ~/code/my_app]$ rm tests/utilities/__init__.py tests/__init__.py
As of February 2023, according to the pytest documentation, you can simply add the following config to your pyproject.toml to solve this problem:
[tool.pytest.ini_options]
pythonpath = "src"
addopts = [
"--import-mode=importlib",
]
I ran into this issue as well and am using poetry for dependency management and direnv for my project specific environment variables. Please note, I am relatively new to Python so I don't know if this is the correct fix.
Here is my entire .envrc file:
layout_poetry() {
  if [[ ! -f pyproject.toml ]]; then
    log_error 'No pyproject.toml found. Use `poetry new` or `poetry init` to create one first.'
    exit 2
  fi

  local VENV=$(poetry env list --full-path | cut -d' ' -f1)
  if [[ -z $VENV || ! -d $VENV/bin ]]; then
    log_error 'No created poetry virtual environment found. Use `poetry install` to create one first.'
    exit 2
  fi

  VENV=$VENV/bin
  export VIRTUAL_ENV=$(echo "$VENV" | rev | cut -d'/' -f2- | rev)
  export POETRY_ACTIVE=1
  PATH_add "$VENV"
}
layout poetry
export PYTHONDONTWRITEBYTECODE=1
export PYTHONPATH="$PWD/project_name"
I don't know if I need layout_poetry, because direnv is supposed to be creating virtual environments for us already, but this is what a coworker recommended so I went with it. layout poetry also didn't work without that function, and it didn't like it when I added it to my zshenv, so I added it here.
For this specific question, the last line is the money maker.
ANOTHER SUGGESTION
See this answer: https://stackoverflow.com/a/69691436/595305
I was facing the issue, which I resolved by:
Installing pytest at the root of my project using pip install pytest
Adding a blank __init__.py as a sibling of the test_file.py I wanted to execute.
I have resolved it by adding export PYTHONPATH="your root dir/src"
i.e.
export PYTHONPATH="/builds/project/src"
poetry run pytest .....
The simplest solution I found was to manually add my target module to sys.path. Let's say you have a structure like this:
flaskapp
- src
-- app.py
-- utils
-- ...
- tests
docs
venv
This makes my test folder a sibling to my module's src folder. If I start putting test_* files that need to import some of the module's code, I can simply:
import src.utils.calculator
And this would be fine until I try to import a file that imports another file from the module. The solution is simple: add an __init__.py to your tests folder, and put these lines inside:
import sys, os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../src')))
Just modify the last part of the path to match your module's location and folder name.
For me, when I was checking my project structure, I found that the parent directory and a subdirectory had the same name. When I changed the directory name, it started working. So:
# Did not work
- same_name_project/
  - same_name_project/
  - tests/

# Worked
- different_named_project/
  - a_unique_directory/
  - tests/
I've got a python project (projectA), which I've included as a git submodule in a separate project (projectB) as a subfolder. The structure is laid out as
projectB/
    projectB_file.py
    projectA/ (repository)
        projectA/ (source code)
            module1/
                file1.py (contains Class1)
                file2.py (contains Class2)
            tests/
                test_file1.py
I'm trying to figure out how to lay out __init__.py files so that in projectB_file.py I can import Class1 and Class2 as:
from projectA.module1 import Class1
from projectA import Class2
I think having the top-level projectA be part of the import would be a mistake. That will require you to write your imports with projectA duplicated (e.g. import projectA.projectA.module1.file1).
Instead, you should add the top projectA folder to the module search path in some way (either as part of an install script for projectB, or with a setting in your IDE). That way, projectA as a top-level name will refer to the inner folder, which you actually intend to be the projectA package.
You'll need an __init__.py in each subdirectory you want to treat as a package, which in your case means one in:
projectA
projectA/projectA
projectA/projectA/module1
projectA/projectA/tests
It would definitely be better if you could lose that first projectA folder.
This is an annoying issue. The main approach I use is to edit the PYTHONPATH. This is essentially how I do it.
ProjectB/                     # Main repository
    projectb/                 # Package/library
        __init__.py
        projectB_file.py
    docs/
    tests/
    setup.py                  # Setup file to distribute the library
    freeze.py                 # CX_Freeze file to make executables
    ProjectA/                 # Subrepository
        projecta/             # Package/library
            __init__.py
            projectA_file.py
            file2.py
            submodule/
                __init__.py
                file3.py
        pa_module1/           # Additional package/library in 1 project
            __init__.py
            file1.py
        docs/
        tests/
        setup.py
With this setup I have to edit the python path before running ProjectB's code. I use this structure, because it is easier for deployment and distribution. The subrepository helps track the version of ProjectA for each release of ProjectB.
cd ProjectB
export PYTHONPATH=${PWD}:${PWD}/ProjectA
python projectb/projectB_file.py
projectB_file.py
from projecta import projectA_file
This is for my development environment. For a release environment someone would install with the below code.
# Need a correct setup.py to handle 2 packages for pa_module1 and projecta
pip install ProjectA (install projecta to site-packages)
cd ..
pip install ProjectB (install projectb to site-packages)
This means projectb and projecta are in site packages. From there any file can simply
from projectb import projectB_file
from projecta import projectA_file, file2
from pa_module1 import file1
# Personally I don't like imports that use project.sub.sub.sub ...
# I have seen it cause import confusion and problems, but it is useful in some situations.
from projecta.submodule import file3
#import projecta.projectA_file
#import projecta # if __init__.py has import code
file1.Class1()
file2.Class2()
file3.Class3()
Additionally with pip you can install a library as a developer environment which links to the real project directory instead of copying files to site-packages.
pip install -e ProjectA
My project contains three Python applications. Application 1 is a web app. Applications 2 and 3 contain scripts downloading some data.
All three apps need to use a module Common containing a "model" (classes that are saved to database) and common settings.
I have no clue how to structure this project. I could create three directories, one for each application, and copy Common three times into their directories (doesn't seem right).
Another idea that comes to mind is to create a main directory and put all the files from Common there, including __init__.py. Then, create three subdirectories (submodules), one for each application.
Another way would be installing Common using pip, but that means I would have to reinstall every time I change something in that module.
In previous projects I used .NET - the equivalent in that world would be a Solution with four projects, one of them being Common.
Any suggestions?
I have a similar project that is set up like this
project_root/
    App1/
        __init__.py
    FlaskControlPanel/
        app.py
        static/
        templates/
    models/
        __init__.py
        mymodels.py
Then, I run everything from project_root. I have a small script (either batch or shell depending on my environment) that sets PYTHONPATH=. so that imports work correctly. This is done because I usually develop using PyCharm, where the imports "just work", but when I deploy the final product the path doesn't match what it did in my IDE.
Once the PYTHONPATH is set to include everything from your project root, you can do standard imports.
For example, from my FlaskControlPanel app.py, I have this line:
from models.mymodels import Model1, Model2, Model3
From the App1 __init__.py I have the exact same import statement:
from models.mymodels import Model1, Model2, Model3
I can start the Flask application by running this from my command line (in Windows) while I am in the project_root directory:
setlocal
SET PYTHONPATH=.
python FlaskControlPanel\app.py
The setlocal is used to ensure the PYTHONPATH is only modified for this session.
I like this approach
projects/
    __init__.py
    project1/
        __init__.py
    project2/
        __init__.py
    lib1/
        __init__.py
        libfile.py
    lib2/
        __init__.py
So, I need to cd into the projects folder.
To start a project, use
python -m project_name
This allows me to easily import from any external lib like
from lib1.libfile import [whatever you want]
or
from lib1 import libfile
Make standard Python modules from your apps. I recommend structure like this:
apps/
    common/
        setup.py
        common/
            __init__.py
            models.py
    app1/
        setup.py
        app1/
            __init__.py
            models.py
project/
    requirements.txt
Basic setup.py for app common:
#!/usr/bin/env python
from setuptools import setup, find_packages
setup(
    name='common',
    version='1.0.0',
    packages=find_packages(),
    zip_safe=False,
)
Make similar setup.py for other apps.
Set editable "-e" option for your apps in requirements.txt:
-e apps/common
-e apps/app1
Install requirements with pip:
$ pip install -r requirements.txt
The editable option means that the source files will be linked into the Python environment. Any change in the source files of your apps will have immediate effect without reinstalling them.
Now you can import models from your common app (or any other app) anywhere (in other apps, project files, ...).
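For example, after installing the requirements, code in app1 (or in the project) can import the shared models directly (a sketch; SomeModel is a placeholder for whatever common/models.py actually defines):
# apps/app1/app1/models.py
from common.models import SomeModel  # resolved via the editable install of apps/common

class App1Model(SomeModel):
    """Hypothetical app1 model building on the shared one."""
    pass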
I would create a structure like this:
project_root/
    app1/
        __init__.py
        script.py
    common/
        __init__.py
        models.py  (all "common" models)
app1/script.py
import os, sys

# add parent directory to pythonpath
basepath = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..')
if basepath not in sys.path:
    sys.path.append(basepath)

from common.models import VeryCommonModel

print(VeryCommonModel)
If you don't want to set the python path at runtime, set the python path before running the script:
$ export PYTHONPATH=$PYTHONPATH:/path/to/project_root
And then you can do:
python app1/script.py
The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
new_project/
antigravity/
antigravity.py
test/
test_antigravity.py
setup.py
etc.
My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory as its import antigravity will fail, as the module is not on the path.
I know I could modify PYTHONPATH and other search path related tricks, but I can't believe that's the simplest way - it's fine if you're the developer but not realistic to expect your users to use if they just want to check the tests are passing.
The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with.
So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
The best solution in my opinion is to use the unittest command line interface which will add the directory to the sys.path so you don't have to (done in the TestLoader class).
For example for a directory structure like this:
new_project
├── antigravity.py
└── test_antigravity.py
You can just run:
$ cd new_project
$ python -m unittest test_antigravity
For a directory structure like yours:
new_project
├── antigravity
│ ├── __init__.py # make it a package
│ └── antigravity.py
└── test
    ├── __init__.py          # also make test a package
    └── test_antigravity.py
And in the test modules inside the test package, you can import the antigravity package and its modules as usual:
# import the package
import antigravity
# import the antigravity module
from antigravity import antigravity
# or an object inside the antigravity module
from antigravity.antigravity import my_object
Running a single test module:
To run a single test module, in this case test_antigravity.py:
$ cd new_project
$ python -m unittest test.test_antigravity
Just reference the test module the same way you import it.
Running a single test case or test method:
Also you can run a single TestCase or a single test method:
$ python -m unittest test.test_antigravity.GravityTestCase
$ python -m unittest test.test_antigravity.GravityTestCase.test_method
Running all tests:
You can also use test discovery which will discover and run all the tests for you, they must be modules or packages named test*.py (can be changed with the -p, --pattern flag):
$ cd new_project
$ python -m unittest discover
$ # Also works without discover for Python 3
$ # as suggested by @Burrito in the comments
$ python -m unittest
This will run all the test*.py modules inside the test package.
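For completeness, a minimal test module matching the commands above might look like this (a sketch; my_object and the assertion are placeholders, while GravityTestCase and test_method are the names used in the commands above):
# test/test_antigravity.py
import unittest

from antigravity.antigravity import my_object  # as shown in the import examples above

class GravityTestCase(unittest.TestCase):
    def test_method(self):
        # placeholder check; replace with assertions about your real objects
        self.assertIsNotNone(my_object)

if __name__ == "__main__":
    unittest.main()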
The simplest solution for your users is to provide an executable script (runtests.py or some such) which bootstraps the necessary test environment, including, if needed, adding your root project directory to sys.path temporarily. This doesn't require users to set environment variables, something like this works fine in a bootstrap script:
import sys, os
sys.path.insert(0, os.path.dirname(__file__))
Then your instructions to your users can be as simple as "python runtests.py".
Of course, if the path you need really is os.path.dirname(__file__), then you don't need to add it to sys.path at all; Python always puts the directory of the currently running script at the beginning of sys.path, so depending on your directory structure, just locating your runtests.py at the right place might be all that's needed.
Also, the unittest module in Python 2.7+ (which is backported as unittest2 for Python 2.6 and earlier) now has test discovery built-in, so nose is no longer necessary if you want automated test discovery: your user instructions can be as simple as python -m unittest discover.
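Putting those pieces together, a complete runtests.py could be as small as this (a sketch assuming the script sits in the project root next to the test/ directory):
#!/usr/bin/env python
# runtests.py -- run from anywhere with: python runtests.py
import os
import sys
import unittest

HERE = os.path.dirname(os.path.abspath(__file__))
# make the project root (the directory containing this script) importable
sys.path.insert(0, HERE)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover(os.path.join(HERE, "test"))
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)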
I've had the same problem for a long time. What I recently chose is the following directory structure:
project_path
├── Makefile
├── src
│ ├── script_1.py
│ ├── script_2.py
│ └── script_3.py
└── tests
    ├── __init__.py
    ├── test_script_1.py
    ├── test_script_2.py
    └── test_script_3.py
and in the __init__.py script of the test folder, I write the following:
import os
import sys
PROJECT_PATH = os.getcwd()
SOURCE_PATH = os.path.join(PROJECT_PATH, "src")
sys.path.append(SOURCE_PATH)
Super important for sharing the project is the Makefile, because it enforces running the scripts properly. Here is the command that I put in the Makefile:
run_tests:
	python -m unittest discover .
The Makefile is important not just because of the command it runs but also because of where it runs it from. If you were to cd into tests and do python -m unittest discover ., it wouldn't work, because the __init__.py script in tests calls os.getcwd(), which would then point to the incorrect absolute path (that would be appended to sys.path and you would be missing your source folder). The scripts would run, since discover finds all the tests, but they wouldn't run properly. So the Makefile is there to avoid having to remember this issue.
I really like this approach because I don't have to touch my src folder, my unit tests or my environment variables and everything runs smoothly.
I generally create a "run tests" script in the project directory (the one that is common to both the source directory and test) that loads my "All Tests" suite. This is usually boilerplate code, so I can reuse it from project to project.
run_tests.py:
import unittest
import test.all_tests
testSuite = test.all_tests.create_test_suite()
text_runner = unittest.TextTestRunner().run(testSuite)
test/all_tests.py (from How do I run all Python unit tests in a directory?)
import glob
import unittest
def create_test_suite():
    test_file_strings = glob.glob('test/test_*.py')
    module_strings = ['test.' + str[5:len(str)-3] for str in test_file_strings]
    suites = [unittest.defaultTestLoader.loadTestsFromName(name)
              for name in module_strings]
    testSuite = unittest.TestSuite(suites)
    return testSuite
With this setup, you can indeed just include antigravity in your test modules. The downside is you would need more support code to execute a particular test... I just run them all every time.
From the article you linked to:
Create a test_modulename.py file and put your unittest tests in it. Since the test modules are in a separate directory from your code, you may need to add your module's parent directory to your PYTHONPATH in order to run them:
$ cd /path/to/googlemaps
$ export PYTHONPATH=$PYTHONPATH:/path/to/googlemaps/googlemaps
$ python test/test_googlemaps.py
Finally, there is one more popular unit testing framework for Python (it's that important!), nose. nose helps simplify and extend the builtin unittest framework (it can, for example, automagically find your test code and set up your PYTHONPATH for you), but it is not included with the standard Python distribution.
Perhaps you should look at nose as it suggests?
I had the same problem, with a separate unit tests folder. From the mentioned suggestions I add the absolute source path to sys.path.
The benefit of the following solution is that one can run the file test/test_yourmodule.py without first changing into the test directory:
import sys, os
testdir = os.path.dirname(__file__)
srcdir = '../antigravity'
sys.path.insert(0, os.path.abspath(os.path.join(testdir, srcdir)))
import antigravity
import unittest
I noticed that if you run the unittest command line interface from your "src" directory, then imports work correctly without modification.
python -m unittest discover -s ../test
If you want to put that in a batch file in your project directory, you can do this:
setlocal & cd src & python -m unittest discover -s ../test
Solution/Example for Python unittest module
Given the following project structure:
ProjectName
├── project_name
│   ├── models
│   │   └── thing_1.py
│   └── __main__.py
└── test
    ├── models
    │   └── test_thing_1.py
    └── __main__.py
You can run your project from the root directory with python project_name, which calls ProjectName/project_name/__main__.py.
To run your tests with python test, effectively running ProjectName/test/__main__.py, you need to do the following:
1) Turn your test/models directory into a package by adding an __init__.py file. This makes the test cases within the subdirectory accessible from the parent test directory.
# ProjectName/test/models/__init__.py
from .test_thing_1 import Thing1TestCase
2) Modify your system path in test/__main__.py to include the project_name directory.
# ProjectName/test/__main__.py
import sys
import unittest
sys.path.append('../project_name')
loader = unittest.TestLoader()
testSuite = loader.discover('test')
testRunner = unittest.TextTestRunner(verbosity=2)
testRunner.run(testSuite)
Now you can successfully import things from project_name in your tests.
# ProjectName/test/models/test_thing_1.py
import unittest
from project_name.models import Thing1 # this doesn't work without 'sys.path.append' per step 2 above
class Thing1TestCase(unittest.TestCase):

    def test_thing_1_init(self):
        thing_id = 'ABC'
        thing1 = Thing1(thing_id)
        self.assertEqual(thing_id, thing1.id)
if you run "python setup.py develop" then the package will be in the path. But you may not want to do that because you could infect your system python installation, which is why tools like virtualenv and buildout exist.
If you use VS Code and your tests are located on the same level as your project, then running and debugging your code doesn't work out of the box. What you can do is change your launch.json file:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "pythonPath": "${config:python.pythonPath}",
            "program": "${file}",
            "cwd": "${workspaceRoot}",
            "env": {},
            "envFile": "${workspaceRoot}/.env",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ]
        }
    ]
}
The key line here is envFile
"envFile": "${workspaceRoot}/.env",
In the root of your project, add a .env file.
Inside your .env file, add the path to the root of your project. This will temporarily add the
PYTHONPATH=C:\YOUR\PYTHON\PROJECT\ROOT_DIRECTORY
path to your project, and you will be able to debug unit tests from VS Code.
Use setup.py develop to make your working directory be part of the installed Python environment, then run the tests.
Python 3+
Adding to @Pierre's answer.
Using a unittest directory structure like this:
new_project
├── antigravity
│ ├── __init__.py # make it a package
│ └── antigravity.py
└── test
    ├── __init__.py          # also make test a package
    └── test_antigravity.py
To run the test module test_antigravity.py:
$ cd new_project
$ python -m unittest test.test_antigravity
Or a single TestCase
$ python -m unittest test.test_antigravity.GravityTestCase
Mandatory: don't forget the __init__.py files, even if empty, otherwise this will not work.
You can't import from the parent directory without some voodoo. Here's yet another way that works with at least Python 3.6.
First, have a file test/context.py with the following content:
import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
Then have the following import in the file test/test_antigravity.py:
import unittest

try:
    import context
except ModuleNotFoundError:
    import test.context

import antigravity
Note that the reason for this try-except clause is that
import test.context fails when run with "python test_antigravity.py" and
import context fails when run with "python -m unittest" from the new_project directory.
With this trickery they both work.
Now you can run all the test files within test directory with:
$ pwd
/projects/new_project
$ python -m unittest
or run an individual test file with:
$ cd test
$ python test_antigravity.py
Ok, it's not much prettier than having the content of context.py within test_antigravity.py, but maybe a little. Suggestions are welcome.
It's possible to use wrapper which runs selected or all tests.
For instance:
./run_tests antigravity/*.py
or, to run all tests recursively, use globbing (tests/**/*.py) (enabled by shopt -s globstar).
The wrapper can basically use argparse to parse the arguments like:
parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')
Then load all the tests:
for filename in args.files:
    exec(open(filename).read())
then add them into your test suite (using inspect):
alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.makeSuite(obj))
and run them:
result = unittest.TextTestRunner(verbosity=2).run(alltests)
Check this example for more details.
See also: How to run all Python unit tests in a directory?
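Putting the fragments above together, a self-contained wrapper could look like the following (a sketch; the FooTest name prefix and the file-glob arguments are the same assumptions as in the snippets above):
#!/usr/bin/env python
# run_tests -- usage: ./run_tests antigravity/*.py
import argparse
import inspect
import sys
import unittest

parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')
args = parser.parse_args()

# execute each given file so that its test classes end up in this module's namespace
for filename in args.files:
    exec(open(filename).read())

# collect every class whose name starts with "FooTest" into one suite
alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(obj))

result = unittest.TextTestRunner(verbosity=2).run(alltests)
sys.exit(0 if result.wasSuccessful() else 1)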
Following is my project structure:
ProjectFolder:
    - project:
        - __init__.py
        - item.py
    - tests:
        - test_item.py
I found it better to import in the setUp() method:
import unittest
import sys


class ItemTest(unittest.TestCase):

    def setUp(self):
        sys.path.insert(0, "../project")
        from project import item
        # further setup using this import

    def test_item_props(self):
        # do my assertions
        ...


if __name__ == "__main__":
    unittest.main()
What's the usual way of actually running the tests
I use Python 3.6.2
cd new_project
pytest test/test_antigravity.py
To install pytest: sudo pip install pytest
I didn't set any path variable and my imports are not failing with the same "test" project structure.
I commented out the if __name__ == '__main__' stuff, like this:
test_antigravity.py
import unittest

import antigravity


class TestAntigravity(unittest.TestCase):

    def test_something(self):
        # ... test stuff here
        ...
# if __name__ == '__main__':
#
# if __package__ is None:
#
# import something
# sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
# from .. import antigravity
#
# else:
#
# from .. import antigravity
#
# unittest.main()
You should really use the pip tool.
Use pip install -e . to install your package in development mode. This is a very good practice, recommended by pytest (see their good practices documentation, where you can also find two project layouts to follow).
If you have multiple directories in your test directory, then you have to add an __init__.py file to each directory.
/home/johndoe/snakeoil
└── test
    ├── __init__.py
    ├── frontend
    │   ├── __init__.py
    │   └── test_foo.py
    └── backend
        ├── __init__.py
        └── test_bar.py
Then to run every test at once, run:
python -m unittest discover -s /home/johndoe/snakeoil/test -t /home/johndoe/snakeoil
Source: python -m unittest -h
-s START, --start-directory START
                      Directory to start discovery ('.' default)
-t TOP, --top-level-directory TOP
                      Top level directory of project (defaults to start
                      directory)
This BASH script will execute the python unittest test directory from anywhere in the file system, no matter what working directory you are in.
This is useful when staying in the ./src or ./example working directory and you need a quick unit test:
#!/bin/bash
this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"
python -m unittest discover -s "$readlink"/test -v
No need for a test/__init__.py file to burden your package with memory overhead during production.
This way will let you run the test scripts from wherever you want without messing around with system variables from the command line.
This adds the main project folder to the python path, with the location found relative to the script itself, not relative to the current working directory.
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
Add that to the top of all your test scripts. That will add the main project folder to the system path, so any module imports that work from there will now work. And it doesn't matter where you run the tests from.
You can obviously adjust the path manipulation above to match your main project folder location.
A simple solution for *nix based systems (macOS, Linux); and probably also Git bash on Windows.
PYTHONPATH=$PWD python test/test_antigravity.py
Print statements work easily with this, unlike with pytest test/test_antigravity.py. It's a perfect approach for "scripts", but not really for unit testing.
Of course, if I wanted to do proper automated testing, I would consider pytest with appropriate settings.
With cwd being the root project dir (new_project in your case), you can run the following command without __init__.py in any directory:
python -m unittest discover -s test
But you need the import in test_antigravity.py to be:
from antigravity.antigravity import your_object
instead of:
from antigravity import your_object
If you don't like the from antigravity form, you might like Alan L's answer.
If you are looking for a command line-only solution:
Based on the following directory structure (generalized with a dedicated source directory):
new_project/
    src/
        antigravity.py
    test/
        test_antigravity.py
Windows: (in new_project)
$ set PYTHONPATH=%PYTHONPATH%;%cd%\src
$ python -m unittest discover -s test
See this question if you want to use this in a batch for-loop.
Linux: (in new_project)
$ export PYTHONPATH=$PYTHONPATH:$(pwd)/src
$ python -m unittest discover -s test
With this approach, it is also possible to add more directories to the PYTHONPATH if necessary.
If your project has a setup.py file, try:
python3 setup.py build
and
python3 setup.py develop --user
These do the work of configuring paths and so on. Try it!