I'm working on a system that needs to be able to test Python files with py.test and use the output (which tests passed and failed) within the program. Is there any way to call py.test from within Python, tell it to run the testing code in [name].py against the code in [otherName].py, and have it return the results of the test?
I think you are looking for Calling pytest from Python code on the Usage and Invocations page.
Limiting the run to a specific file can be done as described in Specifying tests / selecting tests.
In other words, this should do the trick:
import pytest
pytest.main(['my_test_file.py'])
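pytest.main returns an exit code (0 means all tests passed). If you also need per-test outcomes inside your program, you can pass a small plugin object that records them via the pytest_runtest_logreport hook. A minimal sketch (my_test_file.py is a placeholder for your test file):
import pytest

class ResultCollector:
    # Records each test's outcome so the calling program can inspect it.
    def __init__(self):
        self.results = {}
    def pytest_runtest_logreport(self, report):
        if report.when == 'call':  # 'call' is the phase that runs the test body
            self.results[report.nodeid] = report.outcome

collector = ResultCollector()
exit_code = pytest.main(['my_test_file.py'], plugins=[collector])  # 0 == all passed
for test_id, outcome in collector.results.items():
    print(test_id, '->', outcome)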
P.S.: The pytest documentation is pretty good; you can find most of the answers there ;).
I'm using GNU Emacs 24.5.1 to work on Python code. I often want to run just a single unit test. I can do this, for example, by running:
test=spi.test_views.IndexViewTest.generate_select2_data_with_embedded_spaces make test
with M-x compile. My life would be simpler if I could give some command like "run the test where point is" and have Emacs figure out the full name of the test for me. Is that possible?
Update: with the following buffer, I'd like some command which runs M-x compile with:
test=spi.test_views.IndexViewTest.test_unknown_button make test
where spi is the name of the directory test_views.py is in. Well, technically, I need to construct the Python path to my test function, but in practice it'll be <directory.file.class.function>.
This seems like the kind of thing somebody would have already invented, but I don't see anything in the python mode docs.
I believe you use the "default" python mode, while the so-called elpy mode (which I strongly recommend trying for Python development within Emacs) seems to provide what you are looking for:
C-c C-t (elpy-test)
Start a test run. This uses the currently configured test runner to discover and run tests. If point is inside a test case, the test runner will run exactly that test case. Otherwise, or if a prefix argument is given, it will run all tests.
Extra details
The elpy-test function internally relies on elpy-test-at-point, which appears to be very close to the feature you mentioned in the question.
I am using pytest. I like the way I call pytest (retry the failed tests first, verbose, grab and show serial output, stop at first failure):
pytest --failed-first -v -s -x
However there is one more thing I want:
I want pytest to run the new tests (i.e., tests never run before) immediately after the --failed-first ones. This way, when working with tests that are slow to run, I would get the most relevant information as soon as possible.
Any way to do that?
This may not be directly what you are asking about, but my understanding is that the test execution order matters to you when you create new tests during development.
Since you are already working with these "new" tests, the pytest-ordering plugin might be a good option to consider. It lets you influence the execution order by decorating your tests with @pytest.mark.first, @pytest.mark.second, etc., as sketched below.
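A minimal sketch, assuming the pytest-ordering plugin is installed (test names and bodies are illustrative):
import pytest

@pytest.mark.second
def test_uses_result():
    assert True  # runs second, despite appearing first in the file

@pytest.mark.first
def test_prepares_data():
    assert True  # pytest-ordering moves this test to the front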
pytest-ordering changes the execution order by means of a pytest_collection_modifyitems hook. There is also the pytest-random-order plugin, which uses the same hook to control the order.
You can also define a hook of your own, adjusted to your specific needs. For example, here another hook is used to shuffle the tests:
Dynamically control order of tests with pytest
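A minimal conftest.py sketch of that approach; shuffling is just one possible reordering, and anything that rearranges the items list works the same way:
import random

def pytest_collection_modifyitems(session, config, items):
    # 'items' is the list of collected tests; pytest executes them
    # in whatever order this list is in after the hook returns.
    random.shuffle(items)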
For anyone coming to this now: pytest added a --new-first option to run new tests before all other tests. It can be combined with --failed-first to run new and failed tests first. For test-driven development, I've found it helpful to use these options with pytest-watch, which I described in my blog.
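For example, combining both options with the original invocation:
pytest --failed-first --new-first -v -s -x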
I have a set of 50-odd unit tests scheduled to be run with Python nosetests.
These 50+ unit tests are written in a specific order, such that each one leverages the output of the previous one when run as a single test suite.
However, nosetests seems to pick an order of its own and doesn't respect the order I've configured the unit tests in.
I've looked around for possible answers/references on Stack Overflow, but without success. Can anyone point me to existing configuration parameters or nosetests flags that can be set so that the unit tests within the test suite execute in a given order?
Thanks in advance.
Two answers:
Unit tests should be written in such a way that you don't need the outcome of the previous test to run the other tests; that's the whole idea of abstracting the "units" you're testing. If you really want a scenario test, write it as one test.
As stated by jeberle, tests are run alphabetically. If you really want to enforce an order, you can do something like:
def test_01_foo():  # the numeric prefixes force the alphabetical order
    assert foo()

def test_02_bar():  # sorts, and therefore runs, after test_01_foo
    assert bar()
Nose runs the tests in a specific order:
Like py.test, nose runs functional tests in the order in which they appear in the module file. TestCase-derived tests and other test classes are run in alphabetical order.
More here: nose.readthedocs.io
I am using nosetests --with-coverage to test and see code coverage of my unit tests. The class that I test has many external dependencies and I mock all of them in my unit test.
When I run nosetests --with-coverage, it shows a really long list of all the imports (including some modules I don't even know where they are used).
I learned that I can use .coveragerc for configuration purposes, but I cannot find helpful instructions on the web.
My questions are:
1) In which directory do I need to add .coveragerc? How do I specify the directories in .coveragerc? My tests are in a folder called "tests":
/project_folder
/project_folder/tests
2) It is going to be a pretty long list if I add each module under omit=...
What is the best way to show only the class that I am testing in the coverage report?
It would be nice if I could get some beginner level code examples for .coveragerc. Thanks.
The simplest way to direct coverage.py's focus is to use the source option, usually source=., to indicate that you only want to measure code in the current working tree.
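A beginner-level .coveragerc sketch along those lines; it goes in the directory you run nosetests from (typically the project root), and the omit pattern is an assumption based on your tests folder:
[run]
source = .
omit = tests/*

[report]
show_missing = True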
You can also use the --cover-package=PACKAGE option. For example:
nosetests --with-coverage --cover-package=module_you_are_testing
See http://nose.readthedocs.org/en/latest/plugins/cover.html for details.
I wrote a Python module. Running python filename.py only checks for syntax errors. Is there a tool that also checks for runtime errors, like concatenating an int with a string?
Thank you
Bala
Update:
The scripts are mainly about setting up a Hadoop cluster in the cloud. I am not sure how I can write a unit test, because everything runs in the cloud. You can think of the code as legacy code; I just added more logging and some extra conditions in a few places.
Traditionally, if not writing full-fledged unit tests and/or doc tests (writing lots of tests is of course best practice!), one at least puts a def main(): function in every module to exercise it, and ends the module with
if __name__ == '__main__':
    main()
so main() won't get in the way if the module is just imported, but it will execute if you run the module as your main script. Of course, you need to actually exercise the code in the module from within main() for this to catch semantic problems such as the type error you mention. Doing a really thorough job this way is often as hard as writing real unit tests and doc tests would be, but you can at least get started!
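A minimal sketch of that pattern (the module contents are purely illustrative):
def concat_label(prefix, n):
    # Would raise TypeError at run time if the str() call were missing.
    return prefix + str(n)

def main():
    # Exercise the module's code so runtime errors surface
    # whenever the file is run directly.
    assert concat_label('node-', 7) == 'node-7'
    print('self-test passed')

if __name__ == '__main__':
    main()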
You could write a unit test for your module. That way it will execute your code and any runtime errors (or even better, test failures) will be reported.
If you choose to go down this route, http://docs.python.org/library/unittest.html would probably be a good place to start. Alternatively, as Alex wrote, you can just put code at the bottom of your module that will execute when the module is run directly. This is more expedient and probably a better first approach, although if you have a lot of modules you may want a more structured approach.
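A minimal unittest sketch along those lines (mymodule and concat_label are placeholder names):
import unittest
import mymodule

class SmokeTest(unittest.TestCase):
    def test_concat_label(self):
        self.assertEqual(mymodule.concat_label('node-', 7), 'node-7')

if __name__ == '__main__':
    unittest.main()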
You can give pyanalyze a try. It is able to detect possible runtime errors without running the program.
pip3 install pyanalyze
python3 -m pyanalyze file.py
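As an illustration, given a file like the following, pyanalyze should report the string/int concatenation even though plain execution never reaches it (the file contents are hypothetical):
# file.py
def broken():
    return 'count: ' + 3  # str + int: never executed, so python file.py stays silent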