Emulating "nosetests -w working-dir" behavior using py.test - python

I am porting a series of tests from nosetests + python unittest to py.test. I was pleasantly surprised to learn that py.test supports python unittests and running existing tests with py.test is just as easy as calling py.test instead of nosetests on the command line. However I am having problems with specifying the working directory for the tests. They are not in the root project dir but in a subdir. Currently the tests are run like this:
$ nosetests -w test-dir/ tests.py
which changes the current working directory to test-dir and runs all the tests in tests.py. However, when I use py.test
$ py.test test-dir/tests.py
all the tests in tests.py are run but the current working directory is not changed to test-dir. Most tests assume that the working directory is test-dir and try to open and read files from it which obviously fails.
So my question is how to change the current working directory for all tests when using py.test.
There are a lot of tests and I don't want to invest the time to fix them all so that they work regardless of the cwd.
Yes, I can simply do cd test-dir; py.test tests.py but I am used to working from the project root directory and don't want to cd every time I want to run a test.
Here is some code that may give you a better idea of what I am trying to achieve:
content of tests.py:
import unittest

class MyProjectTestCase(unittest.TestCase):
    def test_something(self):
        with open('testing-info.txt', 'r') as f:
            pass  # test something with f
directory layout:
my-project/
    test-dir/
        tests.py
        testing-info.txt
And then when I try to run the tests:
$ pwd
my-project
$ nosetests -w test-dir tests.py
# all is fine
$ py.test test-dir/tests.py
# tests fail because they cannot open testing-info.txt

So this is the best I could come up with:
# content of conftest.py
import pytest
import os

def pytest_addoption(parser):
    parser.addoption("-W", action="store", default=".",
                     help="Change current working dir before running the collected tests.")

def pytest_sessionstart(session):
    os.chdir(session.config.getoption('W'))
And then when running the tests
$ py.test -W test-dir test-dir/tests.py
It's not clean but it will do the trick until I fix all the tests.
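A cleaner sketch, assuming your pytest version provides monkeypatch.chdir, is an autouse fixture in test-dir/conftest.py that switches into the test directory for every test and restores the previous cwd afterwards:

# content of test-dir/conftest.py
import os
import pytest

@pytest.fixture(autouse=True)
def _run_from_test_dir(monkeypatch):
    # chdir into the directory containing this conftest.py for each test;
    # monkeypatch undoes the change automatically when the test finishes
    monkeypatch.chdir(os.path.dirname(os.path.abspath(__file__)))

With that in place, py.test test-dir/tests.py can be run from the project root without the extra -W option.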

Related

How to fix pytest marker not picking up tests

I have my tests at src/com/xyz/tests/bla_tests/file1.py. I also have file2.py at that path.
I have the following code in file1.py:
import pytest

class HelloWorld:
    @pytest.mark.regressiontest
    def test_001_bla(self):
        # Some code here.
        pass
From the command line I invoke the test with:
pytest -v -m regressiontest, but no items are collected. It says: collected 0 items.
My aim is to mark multiple tests across various files as regressiontest and be able to run them from the command line (on a Mac).
The following command is able to collect the tests though
pytest -q file1.py -v -m regressiontest
This says: collected 20 items / 11 deselected / 9 selected
Can someone help me out?
Got the resolution here:
pytest by default doesn't look for tests in files not named like
test_*.py
also using --basetemp=. actually does a rm -rf . so that's def not a
good idea, we should warn about that
In my case, my file name did not start with test_.
When you invoke pytest -v -m regressiontest, are you running it from your tests folder?
I believe you should do pytest src/com/xyz/tests/bla_tests/ -v -m regressiontest. Hope this helps.
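For illustration, here is roughly what a file that default collection does pick up could look like (a hypothetical test_bla.py; only the marker name is taken from the question):

# test_bla.py -- the file name starts with test_, so pytest collects it by default
import pytest

class TestRegression:
    @pytest.mark.regressiontest
    def test_001_bla(self):
        assert True  # placeholder body

# then, from the repository root:
#   pytest src/com/xyz/tests/bla_tests -v -m regressiontest

Newer pytest versions will additionally warn about unregistered markers (they can be declared in pytest.ini), but collection itself depends mainly on the file, class, and function naming.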

Python unittest does not run tests

I made a small project called demo, with a single test in it
import unittest

class Test(unittest.TestCase):
    def testName1(self):
        self.assertEqual(5+9, 14)

if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
However, from command line
ThinkPad-T520:~/workspacep/demo$ python -m unittest
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Why doesn't this work? In general, how can I run all unit tests from command line with a single line?
The structure of the directory is:
demo
    tests
        demo_test1.py
        __init__.py
There are three gotchas that I know of:
Your test methods inside your TestCases need to start with test (the default method prefix), which testName1 already does.
Your test files need to be named test*.py by default (you can change the pattern with the -p flag when running the tests), e.g. test_demo1.py.
Your tests folder needs to have an __init__.py file in it, or else it won't be considered a valid location to import from.
So the problem here is #2, and there are two options:
A - Restructure your files like this:
demo
    tests
        __init__.py
        test_demo1.py
Then run python -m unittest and it should find the test cases.
B - Just run it like: python -m unittest discover -p "*test*.py" (quoted so the shell doesn't expand the pattern).
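As a concrete sketch of option A, the file simply renamed to test_demo1.py with its contents unchanged:

# demo/tests/test_demo1.py
import unittest

class Test(unittest.TestCase):
    def testName1(self):
        self.assertEqual(5+9, 14)

Running python -m unittest (or python -m unittest discover) from the demo directory should then report that one test ran.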
I fought with the same exact problem a while ago and I solved it by using the test discovery command:
python -m unittest discover -s .
You can pass in your test file pattern as well, plus a whole host of other options: https://docs.python.org/2/library/unittest.html#test-discovery
You need to pass in a list of modules.
For example, if your test file is foo.py, then you can run python -m unittest foo.
For me (running tests from IntelliJ IDEA) I had to remove the class's 'Run configuration'. Earlier on I had wrongly imported unittest as _pytest.unittest and run the test class; of course that didn't work.
I corrected the module import, but the 'run configuration' was still there, causing it to run as a Python script and not as 'Python tests'.
I had a similar issue and figured out that I had to run python3 -m unittest instead, as I had forgotten that python defaults to Python 2 on my system.

Coverage failing on Travis but not on local machine - error depends on order of flags

I'm have the following script section in my .travis.yml file:
script:
# run all tests in mymodule/tests and check coverage of the mymodule dir
# don't report on coverage of files in the mymodule/tests dir itself
- coverage run -m --source mymodule --omit mymodule/tests/* py.test mymodule/tests -v
This works fine on my own (Windows) machine, but throws an error on both Linux and OSX on the Travis build. The error is:
Import by filename is not supported.
With the flags in a different order I see a different error (only on the Linux build; the OSX tests pass with this ordering of the flags):
- coverage run --source mymodule --omit mymodule/tests/* -m py.test mymodule/tests -v
Can't find '__main__' module in 'mymodule/tests/geometry_tests'
What am I doing wrong here?
Solved by changing from using coverage directly to using pytest-cov.
script:
# run all tests in mymodule/tests and check coverage of the mymodule dir
- py.test -v --cov-config .coveragerc --cov=mymodule mymodule/tests
And the .coveragerc file:
# .coveragerc to control coverage.py
[run]
# don't report on coverage of files in the tests dir itself
omit =
mymodule/tests/*
I don't know why this works where the previous approach didn't, but this at least solves the problem.

run nosetests in all subdirectories

I can run tests in the workflow folder with nosetests:
workflow maks$ nosetests
..........
----------------------------------------------------------------------
Ran 10 tests in 0.093s
OK
my tests live in the test folder:
workflow maks$ ls
__pycache__ iterations.py test
data iterationsClass.py testData
env iterationsClass.pyc
But when I move to parent dir:
(py3env)Makss-Mac:workflow maks$ cd ..
It cannot find the tests.
(py3env)Makss-Mac:Packages maks$ nosetests
----------------------------------------------------------------------
Ran 0 tests in 0.005s
OK
So how do I make nosetests search for tests in all subdirectories?
If you make your workflow folder a package by placing an __init__.py in it, nose should be able to find your tests.
If you don't want to make your folder a package, put a setup.cfg in the directory you want to run nosetests from; it automatically passes the tests option to nosetests to specify the relative location of the tests. Like so:
#setup.cfg contents:
[nosetests]
exe = True
tests = workflow/
You can also pass multiple folders/tests using this same mechanism:
#setup.cfg contents:
[nosetests]
exe = True
tests = workflow/, other/dir/full/of/tests/, direct/file/my_test.py
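Alternatively, nose will also take the test location straight from the command line, either as a directory argument or via the -w/--where flag (the same flag used in the first question above), so from the parent directory something like this should also work without a config file:
$ nosetests workflow
$ nosetests -w workflow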

Running unittest with typical test directory structure

The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
new_project/
    antigravity/
        antigravity.py
    test/
        test_antigravity.py
    setup.py
    etc.
My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path.
I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but it's not realistic to expect your users to do that just to check that the tests are passing.
The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with.
So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
The best solution in my opinion is to use the unittest command line interface which will add the directory to the sys.path so you don't have to (done in the TestLoader class).
For example for a directory structure like this:
new_project
├── antigravity.py
└── test_antigravity.py
You can just run:
$ cd new_project
$ python -m unittest test_antigravity
For a directory structure like yours:
new_project
├── antigravity
│   ├── __init__.py          # make it a package
│   └── antigravity.py
└── test
    ├── __init__.py          # also make test a package
    └── test_antigravity.py
And in the test modules inside the test package, you can import the antigravity package and its modules as usual:
# import the package
import antigravity
# import the antigravity module
from antigravity import antigravity
# or an object inside the antigravity module
from antigravity.antigravity import my_object
Running a single test module:
To run a single test module, in this case test_antigravity.py:
$ cd new_project
$ python -m unittest test.test_antigravity
Just reference the test module the same way you import it.
Running a single test case or test method:
Also you can run a single TestCase or a single test method:
$ python -m unittest test.test_antigravity.GravityTestCase
$ python -m unittest test.test_antigravity.GravityTestCase.test_method
Running all tests:
You can also use test discovery, which will discover and run all the tests for you. They must be modules or packages named test*.py (this can be changed with the -p/--pattern flag):
$ cd new_project
$ python -m unittest discover
$ # Also works without discover for Python 3
$ # as suggested by #Burrito in the comments
$ python -m unittest
This will run all the test*.py modules inside the test package.
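Putting the import and the naming conventions together, a minimal test/test_antigravity.py might look like this (GravityTestCase and my_object are simply the placeholder names used above):

# test/test_antigravity.py
import unittest
from antigravity.antigravity import my_object

class GravityTestCase(unittest.TestCase):
    def test_method(self):
        # placeholder assertion against the imported object
        self.assertIsNotNone(my_object)

if __name__ == '__main__':
    unittest.main()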
The simplest solution for your users is to provide an executable script (runtests.py or some such) which bootstraps the necessary test environment, including, if needed, adding your root project directory to sys.path temporarily. This doesn't require users to set environment variables; something like this works fine in a bootstrap script:
import sys, os
sys.path.insert(0, os.path.dirname(__file__))
Then your instructions to your users can be as simple as "python runtests.py".
Of course, if the path you need really is os.path.dirname(__file__), then you don't need to add it to sys.path at all; Python always puts the directory of the currently running script at the beginning of sys.path, so depending on your directory structure, just locating your runtests.py at the right place might be all that's needed.
Also, the unittest module in Python 2.7+ (which is backported as unittest2 for Python 2.6 and earlier) now has test discovery built-in, so nose is no longer necessary if you want automated test discovery: your user instructions can be as simple as python -m unittest discover.
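A hedged sketch of such a bootstrap script for the layout in the question, combining the sys.path tweak above with the built-in discovery (the details here are illustrative, not taken from the answer):

#!/usr/bin/env python
# runtests.py -- lives in the project root, next to antigravity/ and test/
import os
import sys
import unittest

HERE = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, HERE)  # make antigravity importable regardless of the cwd

if __name__ == '__main__':
    suite = unittest.defaultTestLoader.discover(os.path.join(HERE, 'test'), top_level_dir=HERE)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)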
I've had the same problem for a long time. What I recently chose is the following directory structure:
project_path
├── Makefile
├── src
│   ├── script_1.py
│   ├── script_2.py
│   └── script_3.py
└── tests
    ├── __init__.py
    ├── test_script_1.py
    ├── test_script_2.py
    └── test_script_3.py
and in the __init__.py script of the test folder, I write the following:
import os
import sys

PROJECT_PATH = os.getcwd()
SOURCE_PATH = os.path.join(PROJECT_PATH, "src")
sys.path.append(SOURCE_PATH)
Super important for sharing the project is the Makefile, because it enforces running the scripts properly. Here is the command that I put in the Makefile:
run_tests:
	python -m unittest discover .
The Makefile is important not just because of the command it runs but also because of where it runs it from. If you were to cd into tests and run python -m unittest discover . there, it wouldn't work, because the __init__.py script in tests calls os.getcwd(), which would then point to the wrong absolute path (that path would be appended to sys.path and you would be missing your source folder). The scripts would run, since discover finds all the tests, but they wouldn't run properly. So the Makefile is there to avoid having to remember this issue.
I really like this approach because I don't have to touch my src folder, my unit tests or my environment variables and everything runs smoothly.
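One hedged variant of that tests/__init__.py avoids the os.getcwd() pitfall by deriving the project path from the file's own location, so discovery works from any directory (a sketch, not the original answer's code):

# tests/__init__.py
import os
import sys

# the project root is the parent of this tests/ directory
PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SOURCE_PATH = os.path.join(PROJECT_PATH, "src")
sys.path.append(SOURCE_PATH)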
I generally create a "run tests" script in the project directory (the one that is common to both the source directory and test) that loads my "All Tests" suite. This is usually boilerplate code, so I can reuse it from project to project.
run_tests.py:
import unittest
import test.all_tests
testSuite = test.all_tests.create_test_suite()
text_runner = unittest.TextTestRunner().run(testSuite)
test/all_tests.py (from How do I run all Python unit tests in a directory?)
import glob
import unittest

def create_test_suite():
    test_file_strings = glob.glob('test/test_*.py')
    module_strings = ['test.' + s[5:-3] for s in test_file_strings]
    suites = [unittest.defaultTestLoader.loadTestsFromName(name)
              for name in module_strings]
    testSuite = unittest.TestSuite(suites)
    return testSuite
With this setup, you can indeed just import antigravity in your test modules. The downside is that you would need more support code to execute a particular test... I just run them all every time.
From the article you linked to:
Create a test_modulename.py file and put your unittest tests in it. Since the test modules are in a separate directory from your code, you may need to add your module's parent directory to your PYTHONPATH in order to run them:
$ cd /path/to/googlemaps
$ export PYTHONPATH=$PYTHONPATH:/path/to/googlemaps/googlemaps
$ python test/test_googlemaps.py
Finally, there is one more popular unit testing framework for Python (it's that important!), nose. nose helps simplify and extend the builtin unittest framework (it can, for example, automagically find your test code and setup your PYTHONPATH for you), but it is not included with the standard Python distribution.
Perhaps you should look at nose as it suggests?
I had the same problem, with a separate unit tests folder. Following the suggestions above, I add the absolute source path to sys.path.
The benefit of the following solution is that one can run the file test/test_yourmodule.py without first changing into the test directory:
import sys, os
testdir = os.path.dirname(__file__)
srcdir = '../antigravity'
sys.path.insert(0, os.path.abspath(os.path.join(testdir, srcdir)))
import antigravity
import unittest
I noticed that if you run the unittest command line interface from your "src" directory, then imports work correctly without modification.
python -m unittest discover -s ../test
If you want to put that in a batch file in your project directory, you can do this:
setlocal & cd src & python -m unittest discover -s ../test
Solution/Example for Python unittest module
Given the following project structure:
ProjectName
├── project_name
│   ├── models
│   │   └── thing_1.py
│   └── __main__.py
└── test
    ├── models
    │   └── test_thing_1.py
    └── __main__.py
You can run your project from the root directory with python project_name, which calls ProjectName/project_name/__main__.py.
To run your tests with python test, effectively running ProjectName/test/__main__.py, you need to do the following:
1) Turn your test/models directory into a package by adding an __init__.py file. This makes the test cases within the subdirectory accessible from the parent test directory.
# ProjectName/test/models/__init__.py
from .test_thing_1 import Thing1TestCase
2) Modify your system path in test/__main__.py to include the project_name directory.
# ProjectName/test/__main__.py
import sys
import unittest
sys.path.append('../project_name')
loader = unittest.TestLoader()
testSuite = loader.discover('test')
testRunner = unittest.TextTestRunner(verbosity=2)
testRunner.run(testSuite)
Now you can successfully import things from project_name in your tests.
# ProjectName/test/models/test_thing_1.py
import unittest
from project_name.models import Thing1  # this doesn't work without 'sys.path.append' per step 2 above

class Thing1TestCase(unittest.TestCase):

    def test_thing_1_init(self):
        thing_id = 'ABC'
        thing1 = Thing1(thing_id)
        self.assertEqual(thing_id, thing1.id)
If you run "python setup.py develop" then the package will be on the path. But you may not want to do that, because you could infect your system Python installation, which is why tools like virtualenv and buildout exist.
If you use VS Code and your tests are located on the same level as your project, then running and debugging your code doesn't work out of the box. What you can do is change your launch.json file:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "pythonPath": "${config:python.pythonPath}",
            "program": "${file}",
            "cwd": "${workspaceRoot}",
            "env": {},
            "envFile": "${workspaceRoot}/.env",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ]
        }
    ]
}
The key line here is envFile
"envFile": "${workspaceRoot}/.env",
In the root of your project, add a .env file.
Inside the .env file, add the path to the root of your project:
PYTHONPATH=C:\YOUR\PYTHON\PROJECT\ROOT_DIRECTORY
This temporarily adds that path to PYTHONPATH, and you will be able to debug unit tests from VS Code.
Use setup.py develop to make your working directory be part of the installed Python environment, then run the tests.
Python 3+
Adding to #Pierre
Using unittest directory structure like this:
new_project
├── antigravity
│   ├── __init__.py          # make it a package
│   └── antigravity.py
└── test
    ├── __init__.py          # also make test a package
    └── test_antigravity.py
To run the test module test_antigravity.py:
$ cd new_project
$ python -m unittest test.test_antigravity
Or a single TestCase
$ python -m unittest test.test_antigravity.GravityTestCase
Mandatory: don't forget the __init__.py files; even if they are empty, without them this will not work.
You can't import from the parent directory without some voodoo. Here's yet another way that works with at least Python 3.6.
First, have a file test/context.py with the following content:
import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
Then have the following import in the file test/test_antigravity.py:
import unittest
try:
import context
except ModuleNotFoundError:
import test.context
import antigravity
Note that the reason for this try-except clause is that
import test.context fails when run with "python test_antigravity.py" and
import context fails when run with "python -m unittest" from the new_project directory.
With this trickery they both work.
Now you can run all the test files within test directory with:
$ pwd
/projects/new_project
$ python -m unittest
or run an individual test file with:
$ cd test
$ python test_antigravity.py
Ok, it's not much prettier than having the content of context.py within test_antigravity.py, but maybe a little. Suggestions are welcome.
It's possible to use wrapper which runs selected or all tests.
For instance:
./run_tests antigravity/*.py
or, to run all tests recursively, use globbing (tests/**/*.py, enabled via shopt -s globstar).
The wrapper can basically use argparse to parse the arguments like:
parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')
Then load all the tests:
for filename in args.files:
    exec(open(filename).read())
then add them into your test suite (using inspect):
alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.makeSuite(obj))
and run them:
result = unittest.TextTestRunner(verbosity=2).run(alltests)
Check this example for more details.
See also: How to run all Python unit tests in a directory?
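Assembled into a single file, a minimal version of such a wrapper might look like this (a sketch; run_tests and the FooTest prefix are just the placeholder names used above):

#!/usr/bin/env python
# run_tests -- execute the given files and run every FooTest* TestCase defined in them
import argparse
import inspect
import sys
import unittest

parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')
args = parser.parse_args()

# pull the test definitions into this module's namespace
for filename in args.files:
    exec(open(filename).read())

# collect every matching TestCase class into one suite and run it
alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.makeSuite(obj))

result = unittest.TextTestRunner(verbosity=2).run(alltests)
sys.exit(0 if result.wasSuccessful() else 1)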
Following is my project structure:
ProjectFolder:
    - project:
        - __init__.py
        - item.py
    - tests:
        - test_item.py
I found it better to import in the setUp() method:
import unittest
import sys

class ItemTest(unittest.TestCase):
    def setUp(self):
        sys.path.insert(0, "../project")
        from project import item
        # further setup using this import

    def test_item_props(self):
        # do my assertions
        pass

if __name__ == "__main__":
    unittest.main()
What's the usual way of actually running the tests
I use Python 3.6.2
cd new_project
pytest test/test_antigravity.py
To install pytest: sudo pip install pytest
I didn't set any path variable, and my imports do not fail with the same "test" project structure.
I commented out the if __name__ == '__main__' block, like this:
test_antigravity.py
import unittest
import antigravity

class TestAntigravity(unittest.TestCase):
    def test_something(self):
        # ... test stuff here
        pass

# if __name__ == '__main__':
#
#     if __package__ is None:
#
#         import something
#         sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
#         from .. import antigravity
#
#     else:
#
#         from .. import antigravity
#
#     unittest.main()
You should really use the pip tool.
Use pip install -e . to install your package in development mode. This is a very good practice, recommended by pytest (see their good practices documentation, where you can also find two project layouts to follow).
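For pip install -e . to work, the project needs some packaging metadata; a minimal, hypothetical setup.py for the layout in the question could be as small as:

# setup.py, placed in new_project/ next to antigravity/ and test/
from setuptools import setup, find_packages

setup(
    name="antigravity",
    version="0.1",
    packages=find_packages(exclude=["test", "test.*"]),
)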
If you have multiple directories in your test directory, then you have to add an __init__.py file to each directory.
/home/johndoe/snakeoil
└── test
    ├── __init__.py
    ├── frontend
    │   ├── __init__.py
    │   └── test_foo.py
    └── backend
        ├── __init__.py
        └── test_bar.py
Then to run every test at once, run:
python -m unittest discover -s /home/johndoe/snakeoil/test -t /home/johndoe/snakeoil
Source: python -m unittest -h
-s START, --start-directory START
Directory to start discovery ('.' default)
-t TOP, --top-level-directory TOP
Top level directory of project (defaults to start
directory)
This BASH script will execute the python unittest test directory from anywhere in the file system, no matter what working directory you are in.
This is useful when staying in the ./src or ./example working directory and you need a quick unit test:
#!/bin/bash
this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"
python -m unittest discover -s "$readlink"/test -v
There is no need for a test/__init__.py file to burden your package during production.
This way will let you run the test scripts from wherever you want without messing around with system variables from the command line.
This adds the main project folder to the python path, with the location found relative to the script itself, not relative to the current working directory.
import sys, os
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
Add that to the top of all your test scripts. That will add the main project folder to the system path, so any module imports that work from there will now work. And it doesn't matter where you run the tests from.
You can obviously change the path expression to match your main project folder location.
A simple solution for *nix based systems (macOS, Linux); and probably also Git bash on Windows.
PYTHONPATH=$PWD python test/test_antigravity.py
print statements work easily this way, unlike with pytest test/test_antigravity.py. It is a fine approach for "scripts", but not really for unit testing.
Of course, if I wanted to do proper automated testing, I would consider pytest with appropriate settings.
With cwd being the root project dir (new_project in your case), you can run the following command without an __init__.py in any directory:
python -m unittest discover -s test
But you need the import in test_antigravity.py to be:
from antigravity.antigravity import your_object
instead of:
from antigravity import your_object
If you don't like from antigravity clause, you might like Alan L's answer.
If you are looking for a command line-only solution:
Based on the following directory structure (generalized with a dedicated source directory):
new_project/
src/
antigravity.py
test/
test_antigravity.py
Windows: (in new_project)
$ set PYTHONPATH=%PYTHONPATH%;%cd%\src
$ python -m unittest discover -s test
See this question if you want to use this in a batch for-loop.
Linux: (in new_project)
$ export PYTHONPATH=$PYTHONPATH:$(pwd)/src [I think - please edit this answer if you are a Linux user and you know this]
$ python -m unittest discover -s test
With this approach, it is also possible to add more directories to the PYTHONPATH if necessary.
If your project has a setup.py file, try:
python3 setup.py build
and
python3 setup.py develop --user
These do the work of configuring paths and so on. Try it!
