I first run
nosetests --with-coverage
So I should have a .coverage file with all the default settings.
Within folder_1, I have file_1.py, file_2.py, and file_3.py
When I cd into folder_1 and run
coverage report
It outputs:
It doesn't generate anything for file_3.py! But then when I run:
coverage report file_3.py
it says:
Does it skip files with no coverage in the report? How can I change it so the report shows me the results of every *.py file?
You need to specify a source directory for coverage.py to find files that have never been executed at all. You can use --source=folder_1 on the command line, or [run] source=folder_1 in your .coveragerc file.
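Spelled out, the .coveragerc form of that setting is just:

```ini
[run]
source = folder_1
```

With this in place, a plain coverage report run after the tests should also list files under folder_1 that were never imported, showing them at 0% coverage.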
I ran into this same scenario yesterday and lost some time trying to make Coverage consider the file corresponding to this file_3.py. Ned Batchelder's answer is completely correct and helped me, but when handling multiple folder_1-style folders at the same level in the hierarchy I'd have to set all of them as source, and that is not ideal.
The key is this part of the official doc:
If the source option is specified, only code in those locations will be measured. Specifying the source option also enables coverage.py to report on unexecuted files, since it can search the source tree for files that haven’t been measured at all. Only importable files (ones at the root of the tree, or in directories with a __init__.py file) will be considered.
So unexecuted files will only be analysed if you point at them. For this scenario that means two options:
Set your folder directly as the source directory, running the tests with the flag --source=folder_1 (which is covered in Ned's answer).
If this is a subfolder of a bigger project, you can also set the main project folder as the source, but then you need to turn the directories you want analysed into packages by creating an __init__.py file in them.
For instance if you have:
src/
    folder_1/
        __init__.py
        file_1.py
        file_2.py
        file_3.py
You can just run with the flag --source=src and folder_1 files will be discoverable as well.
Hope that helps someone in the future.
The Python documentation states that the -t option controls the:
Top level directory of project (defaults to start directory)
Usually people use the -s option (python -m unittest discover tests/ is equivalent to python -m unittest discover -s tests), and I have never seen anyone use -t before. The brief description in the documentation is not enlightening to me.
What does the "top level directory" mean in this particular context? What exactly does the -t option do?
My understanding is that although the top-level directory defaults to whatever the starting directory is, the starting directory must be contained within the top-level directory.
From the first paragraph on test discovery:
Unittest supports simple test discovery. In order to be compatible with test discovery, all of the test files must be modules or packages (including namespace packages) importable from the top-level directory of the project (this means that their filenames must be valid identifiers).
Suppose you have a directory structure like
./
    tests1/
    tests2/
If ./ is the top-level directory and tests1 is the starting directory, no tests will be discovered under tests2, even though tests2 is importable from the top-level directory.
The purpose of -s would be to discover only a subset of tests for a particular project. The purpose of -t might be to choose a particular "subproject" to run tests for.
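To make the -s/-t distinction concrete, here is a runnable sketch that builds the tests1/tests2 layout from above in a temporary directory and then discovers tests the way python -m unittest discover -s tests1 -t . would. The test file contents are purely illustrative:

```python
# Build ./tests1 and ./tests2, each containing one trivial test case,
# then discover with start_dir=tests1 and top_level_dir=the project root.
import os
import tempfile
import unittest

root = tempfile.mkdtemp()
for pkg in ("tests1", "tests2"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()
    with open(os.path.join(root, pkg, "test_sample.py"), "w") as f:
        f.write(
            "import unittest\n"
            "class T(unittest.TestCase):\n"
            "    def test_ok(self):\n"
            "        self.assertTrue(True)\n"
        )

loader = unittest.TestLoader()
# Equivalent of: python -m unittest discover -s tests1 -t .
suite = loader.discover(
    start_dir=os.path.join(root, "tests1"),
    top_level_dir=root,
)
# Only the test in tests1 is collected; tests2 is never visited.
print(suite.countTestCases())
```

Changing start_dir to root itself would pick up both packages, which is exactly the difference between where discovery walks (-s) and where imports resolve from (-t).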
I have an online course platform project. The tree looks like this:
app
-- edxapp
-- edx-platform
-- circle.yml
I want to run the circle.yml in the edx-platform directory. I've followed their documentation here.
First, I created a new circle.yml in the root directory so that the tree looks like this:
circle.yml
app
-- edxapp
-- edx-platform
-- circle.yml
The new circle.yml contains the following:
general:
build_dir: app/edxapp/edx-platform
But, it still didn't work. Then I tried another way. I linked the circle.yml files so that I have one circle.yml in each directory. Each circle.yml just contains the build_dir key with its value pointing to the next sub directory.
Please give me an explanation of why this doesn't work. Also, please give me an alternative way to do it.
Note: The project structure has to be the same.
circle.yml needs to be in the root directory of a repository. That file in any other location will not be processed by CircleCI. Anything that you need to do (any commands, etc), needs to be done from that root file.
build_dir changes where the commands from circle.yml are run, not where the file should be. In essence, whichever directory is set as build_dir becomes the working directory for commands in circle.yml. More details in the CircleCI Docs.
In summary, have only 1 circle.yml file, in the root of your repository. If you had a command ls, this would print the directory listing for ~/MY-REPO-NAME. If you set build_dir to:
general:
build_dir: app/edxapp/edx-platform
then that same ls command will now print the directory listing of ~/MY-REPO-NAME/app/edxapp/edx-platform.
Regards,
Ricardo N Feliciano
Developer Evangelist, CircleCI
This is a broad question because no one seems to have found a solution to it as yet so I think asking to see a working example might prove more useful. So here goes:
Has anyone run a nosetests on a python project using imports of multiple files/packages?
What I mean is, do you have a directory listing such as:
project/
|
|____app/
|___main.py
|___2ndFile.py
|___3rdFile.py
|____tests/
|____main_tests.py
Where your main.py imports multiple files, and you perform a nosetests run from the project folder utilizing a test script in the main_tests.py file? If so, please can you screenshot your import section, both of all your main files and of your main_tests.py file?
This seems to be a major issue in nosetests, with no apparent solution:
Nosetests Import Error
A test running with nosetests fails with ImportError, but works with python command
https://github.com/nose-devs/nose/issues/978
https://github.com/nose-devs/nose/issues/964
You can't have python modules starting with a digit, so 2ndFile.py, 3rdFile.py won't actually work (rename them).
You'll need an __init__.py inside the app directory, for it to be considered a package, so add that (it can be an empty file).
You don't need an __init__.py in the tests directory!
The import statements in main_tests.py should look like from app.main import blah
The absolute path of the project directory needs to be in your sys.path. To achieve this, set an environment variable: export PYTHONPATH=/path/to/project
Now running nosetests should work.
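The steps above can be sketched end-to-end. This runnable example builds the layout from the question in a temporary directory and shows the import working; the file contents (a blah function returning 42) are purely illustrative, not the asker's actual code:

```python
# Recreate the project/app + project/tests layout with the fixes applied.
import os
import sys
import tempfile

project = tempfile.mkdtemp()
os.makedirs(os.path.join(project, "app"))
os.makedirs(os.path.join(project, "tests"))

# app/__init__.py makes `app` a package; tests/ deliberately has no __init__.py.
open(os.path.join(project, "app", "__init__.py"), "w").close()
with open(os.path.join(project, "app", "main.py"), "w") as f:
    f.write("def blah():\n    return 42\n")

# In-process equivalent of: export PYTHONPATH=/path/to/project
sys.path.insert(0, project)

from app.main import blah  # the import form main_tests.py should use
print(blah())
```

With the real project, the sys.path line is replaced by the PYTHONPATH export, and nosetests run from the project directory can then resolve from app.main import blah inside main_tests.py.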
I've taken to putting module code directly in a package's __init__.py, even for simple packages where this ends up being the only file.
So I have a bunch of packages that look like this (though they're not all called pants:)
+ pants/
\-- __init__.py
\-- setup.py
\-- README.txt
\--+ test/
\-- __init__.py
I started doing this because it allows me to put the code in a separate (and, critically, separately versionable) directory, and have it work in the same way as it would if the package were located in a single module.py. I keep these in my dev python lib directory, which I have added into $PYTHONPATH when working on such things. Each package is a separate git repo.
edit...
Compared to the typical Python package layout, as exemplified in Radomir's answer, this setup saves me from having to add each package's directory into my PYTHONPATH.
.../edit
This has worked out pretty well, but I've hit upon this (somewhat obscure) issue:
When running tests from within the package directory, the package itself, i.e. code in __init__.py, is not guaranteed to be on the sys.path. This is not a problem under my typical environment, but if someone downloads pants-4.6.tgz, extracts the source-distribution tarball, cds into the directory, and runs python setup.py test, the package pants itself won't normally be on their sys.path.
I find this strange, because I would expect setuptools to run the tests from a parent directory of the package under test. However, for whatever reason, it doesn't do that, I guess because normally you wouldn't package things this way.
Relative imports don't work because test is a top-level package, having been found as a subdirectory of the current-directory component of sys.path.
I'd like to avoid having to move the code into a separate file and importing its public names into __init__.py. Mostly because that seems like pointless clutter for a simple module.
I could explicitly add the parent directory to sys.path from within setup.py, but would prefer not to. For one thing, this could, at least in theory, fail, e.g. if somebody decides to run the test from the root of their filesystem (presumably a Windows drive). But mostly it just feels jerry-rigged.
Is there a better way?
Is it considered particularly bad form to put code in __init__.py?
I think the standard way to package python programs would be more like this:
\-- setup.py
\-- README.txt
\--+ pants/
\-- __init__.py
\-- __main__.py
...
\--+ tests/
\-- __init__.py
...
\--+ some_dependency_you_need/
...
Then you avoid the problem.
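For completeness, a minimal setup.py for that layout might look like the sketch below. The name and version come from the question; everything else is an assumption, not the asker's actual configuration:

```python
# Illustrative setup.py for the standard layout above.
from setuptools import setup, find_packages

setup(
    name="pants",
    version="4.6",
    # Ship the pants package but not the tests package.
    packages=find_packages(exclude=["tests", "tests.*"]),
)
```

With this layout, tests run from the project root find the pants package on sys.path via the current directory, which is why the __init__.py-on-sys.path problem disappears.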
I'm kind of a rookie with python unit testing, and particularly coverage.py. Is it desirable to have coverage reports include the coverage of your actual test files?
Here's a screenshot of my HTML report as an example.
You can see that the report includes tests/test_credit_card. At first I was trying to omit the tests/ directory from the reports, like so:
coverage html --omit=tests/ -d tests/coverage
I tried several variations of that command but I could not for the life of me get the tests/ excluded. After accepting defeat, I began to wonder if maybe the test files are supposed to be included in the report.
Can anyone shed some light on this?
coverage html --omit="*/test*" -d tests/coverage
Create a .coveragerc file in your project root folder, and include the following:
[run]
omit = *tests*
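The omit setting uses fnmatch-style wildcards, so *tests* excludes any measured path containing "tests". A quick illustration using Python's own fnmatch (the file names here are made up):

```python
# Which (made-up) paths would an omit pattern like *tests* exclude?
from fnmatch import fnmatch

paths = [
    "tests/test_credit_card.py",
    "credit_card.py",
    "app/tests/test_models.py",
]
excluded = [p for p in paths if fnmatch(p, "*tests*")]
print(excluded)
```

Only the two paths containing "tests" match, which is why the single pattern covers test directories at any depth.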
Leaving this here in case any Django developer needs a .coveragerc for their project.
[run]
source = .
omit = ./venv/*,*tests*,*apps.py,*manage.py,*__init__.py,*migrations*,*asgi*,*wsgi*,*admin.py,*urls.py
[report]
omit = ./venv/*,*tests*,*apps.py,*manage.py,*__init__.py,*migrations*,*asgi*,*wsgi*,*admin.py,*urls.py
Create a file named .coveragerc in your project's root directory, paste the above code, and then just run the command:
coverage run manage.py test
In addition, if you want the tests to execute faster, run this command instead.
coverage run manage.py test --keepdb --parallel
This will preserve the test DB and will run the tests in parallel.
You can specify the directories you want to exclude by creating a .coveragerc in the project root.
It supports wildcards (which you can use to exclude virtual environment) and comments (very useful for effective tracking).
The code block below shows how omit can be used with multiple files and directories (taken from the latest documentation).
[run]
omit =
# omit anything in a .local directory anywhere
*/.local/*
# omit everything in /usr
/usr/*
# omit this single file
utils/tirefire.py
In your case, you could have the following in your .coveragerc:
[run]
omit =
# ignore all test cases in tests/
tests/*
For your question on coverage reports, you can think about testing and coverage in the following manner:
When you run pytest or unittest, all the test cases for your source code are executed.
When you run coverage, it shows which parts of the source code were and weren't executed.
When you run pytest with coverage (something like pytest -v --cov), it runs all test cases and reports which parts of the source code weren't executed.
Extra:
You can also specify the location of your HTML report in the configuration file like:
[html]
directory = tests/coverage/html_report/
This is going to create the html, js, css, etc. files inside tests/coverage/html_report/ every time you run coverage or pytest -v --cov.
You can also explicitly specify which directory has the code you want coverage on instead of saying which things to omit. In a .coveragerc file, if the directory of interest is called demo, this looks like
[run]
source = demo