Does Python have a unit testing framework compatible with the standard xUnit style of test framework? If so, what is it, where is it, and is it any good?
Python has several testing frameworks, including unittest, doctest, and nose. The most xUnit-like is unittest, which is documented on Python.org.
unittest documentation
doctest documentation
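A minimal xUnit-style test case with unittest looks roughly like this (the function under test is just a placeholder):

import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def setUp(self):
        # fixture setup runs before every test method, xUnit-style
        self.values = (2, 3)

    def test_add(self):
        self.assertEqual(add(*self.values), 5)

if __name__ == "__main__":
    unittest.main()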
I recommend nose.
It is the most Pythonic of the unit test frameworks. The test runner runs both doctests and unittests, so you are free to use whatever style of test you like.
There's testoob, which is a pretty complete testing suite. It's also xUnit-like and has a nice reporting option.
Consider py.test. Not exactly analogous to NUnit, but very good, with nice features including test auto-discovery and a "Watch the tests and code - when something changes rerun the tests that failed last time. As soon as all the tests pass, switch to running all the tests whenever somethings changes." option.
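As a rough sketch of what py.test's auto-discovery expects (the file and function names here are hypothetical), a file whose name starts with test_ containing plain functions and bare asserts is picked up automatically:

# test_something.py -- discovered automatically because of the test_ prefix
def multiply(a, b):
    return a * b

def test_multiply():
    # py.test lets you use a plain assert; on failure it shows the values involved
    assert multiply(3, 4) == 12

The "rerun the tests that failed last time" behaviour mentioned above comes from the looponfail option (provided by the xdist plugin, if memory serves).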
#Greg: PyUnit is included in the standard library as unittest
I recommend Nose.
After the reasonably simple installation, you just have to run "nosetests" in your project folder and Nose will find all your tests and run them. I also like the collection of plugins (coverage, GAE, etc.) and the ability to call Nose directly from within my Python scripts.
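Calling Nose from a script looks roughly like this (a sketch; the coverage flag is just an illustration):

# run_tests.py -- invoke Nose programmatically instead of the nosetests command
import nose

if __name__ == "__main__":
    # roughly equivalent to running "nosetests --with-coverage" in the project folder
    nose.main(argv=["nosetests", "--with-coverage"])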
There is also PyUnit which might be what you're looking for.
I've never used xUnit, so I can't tell you how the frameworks compare, but here is a script I wrote which uses the unittest framework (to check the API works as it should) and doctest (to check the examples I've given work).
My only problem is that checking something raises an exception is slightly convoluted (you have to pass it a function/lambda that raises the exception, rather than just the expression itself, like the rest of the framework). Other than that, it does what it should, reliably, and it has been included in the default Python distribution for quite some time.
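For reference, the exception check described above looks something like this (a small sketch):

import unittest

class TestConversion(unittest.TestCase):
    def test_bad_input_raises(self):
        # you pass the callable and its arguments, not the call itself
        self.assertRaises(ValueError, int, "not a number")

        # in Python 2.7+ the context-manager form reads more naturally
        with self.assertRaises(ValueError):
            int("not a number")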
nose seems to be the best combination of flexibility and convenience. It runs unittests, doctests, coverage (with an extension) and py.test-like tests from one framework and does so admirably. It has enough popularity that some IDE integration has been done for Komodo Edit, and I wouldn't be surprised to see it elsewhere too.
I like it for one strong reason: I almost always doctest before writing more extensive tests in another framework. This is because, for basic tests, doctests kill two birds with one stone. You get executable tests (although they are a bit clumsy to write well sometimes) as well as API documentation and interactive documentation at the same time. nose will run these with the bundled doctest extension when you use a command-line option (--with-doctest).
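A small sketch of the doctest style being described, where the docstring example doubles as documentation and as a test:

def square(x):
    """Return x squared.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return x * x

if __name__ == "__main__":
    # nose picks these up with --with-doctest; this also lets the file run standalone
    import doctest
    doctest.testmod()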
I say this having come from py.test as my former favorite. While it is great, nose tests are similar enough to me that I don't miss it, and I like the integration of the various test methodologies under one roof, so to speak. YMMV, but I recommend taking a good look at nose before choosing another. If you aren't familiar with py.test tests, you should look at them as well. I find them terrific because they are usually written in such a way that they can be easily debugged without the testing framework, which makes one less tricky system involved in the debugging session. I find that alone invaluable, while they are also easier to write than unittest tests in my opinion.
We have been using Django for a long time. Some old code is not being used now. How can I find which code is not being used any more and remove it?
I used coverage.py with unit tests, which works fine and shows which parts of the code are never used, but the test coverage is very low. Is there any way to use it with a WSGI server to find which code has never served any web requests?
It depends on what you mean by unused code.
For unreachable dead code, like functions that are never called or classes that are never instantiated, you can use a pure static code analyzer to find them. Pylint is a good option. Bear in mind that this is not 100% accurate; false positives are possible:
# static analysis can't detect methods called this way
func = getattr(obj, "func_name")
func()
For code that is reachable but never actually executed, you have to rely on tools like coverage.py and improve your test coverage.
On a well tested project, coverage would be ideal but with some untested legacy code I don't think there is a magical tool.
You could write a big test loading all the pages and run coverage to get some indication.
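One rough way to do that (a sketch; the URL list and the Django test client usage are assumptions about your setup):

# drive the pages you care about while coverage.py records what executes
import coverage

cov = coverage.coverage()  # older API; newer releases also offer coverage.Coverage()
cov.start()

from django.test.client import Client  # assumes DJANGO_SETTINGS_MODULE is configured
client = Client()
for url in ["/", "/accounts/login/", "/reports/"]:  # hypothetical URL list
    client.get(url)

cov.stop()
cov.save()
cov.html_report(directory="coverage_html")  # browsable report of what was actually hit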
Cowboy style:
If it's not critical code and you're fairly sure it's unused (i.e. not handling payments, etc.), comment it out, check that the tests pass, deploy, and wait a week or so before removing it for good (or putting it back if you get a notification).
As other answers indicate, coverage.py accurately finds out which parts of the code are never executed, but coverage requires your code to actually run to perform the analysis. Vulture, on the other hand, uses static analysis to find dead (unused) code in Python programs. Also, if you run Vulture on both your library and test suite, you might be able to find untested code.
Vulture is a standard PyPI package and can be installed using pip:
$ pip install vulture
Run vulture using the command:
$ vulture apps/ tests/ --exclude settings
Bear in mind that due to Python's dynamic nature there may be some false positives, but they can be dealt with by means of "whitelists". Please refer to this answer by Rahul for more information on how to use Vulture with Django and tackle false positives.
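A whitelist is just a Python file that mentions the names Vulture flags incorrectly, passed along with your code (the module and attribute names here are hypothetical):

# whitelist.py -- names referenced here count as "used", silencing false positives
from apps.payments import handlers  # hypothetical module
handlers.charge_card                # accessed only via getattr() at runtime

$ vulture apps/ tests/ whitelist.py --exclude settings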
I've started working on a rather big (multithreaded) Python project, with loads of (unit) tests. The most important problem there is that running the application requires a preset environment, which is implemented by a context manager. So far we have made use of a patched version of the unit test runner that would run the tests inside this manager, but that doesn't allow switching context between different test modules.
Both nose and pytest do support such a thing because they support fixtures at many granularities, so we're looking into switching to nose or pytest. Both these libraries would also support 'tagging' tests and run only these tagged subsets, which is something we also would like to do.
I have been looking through the documentation of both nose and pytest a bit, and as far as I can see the bigger part of those libraries essentially supports the same functionality, except that it may be named differently or require slightly different syntax. Also, I noted some small differences in the available plugins (for instance, nose has multiprocess support, while pytest doesn't seem to).
So it seems, the devil is in the detail, which means (often at least) in personal taste and we better go with the library that fits our personal taste best.
So I'd like to ask for subjective arguments as to why I should go with nose or pytest, in order to choose the library/community combo that best fits our needs.
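For context, a minimal sketch of how the preset environment and the tagging could look in pytest (the context manager and marker names are hypothetical, and yield-style fixtures assume a reasonably recent pytest):

import pytest
from myproject.environment import preset_environment  # hypothetical context manager

@pytest.fixture(scope="module")
def environment():
    # the whole module runs inside one preset environment;
    # other scopes (function, class, session) are available too
    with preset_environment() as env:
        yield env

@pytest.mark.slow  # "tag" the test; select the subset with: py.test -m slow
def test_worker_pool(environment):
    assert environment.is_ready()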
I used to use Nose because it was the default with Pylons. I didn't like it at all. It had configuration tendrils in multiple places, virtually everything seemed to be done with an underdocumented plugin which made it all even more indirect and confusing, and because it did unittest tests by default, it regularly broke with Unicode tracebacks, hiding the sources of errors.
I've been pretty happy with py.test the last couple years. Being able to just write a test with assert out of the box makes me hate writing tests way less, and hacking whatever I need atop the core has been pretty easy. Rather than a fixed plugin interface it just has piles of hooks, and pretty understandable source code should you need to dig further. I even wrote an adapter for running Testify tests under py.test, and had more trouble with Testify than with py.test.
That said, I hear nose has plugins for classless tests and assert introspection nowadays, so you'll probably do fine with either. I still feel like I can hit the ground running with py.test, though, and I can understand what's going on when it breaks.
Is there a standard convention, or even a growing one, around where and how to invoke the tests associated with a project? In many projects, I'm seeing it bundled into a Makefile, a separate test.py script at the top level of the project, etc., to do the work.
I looked around for some common thing with setup.py, but didn't spot anything there (granted, I didn't look hard). What's common and best practice?
The short answer is yes, there's a simple convention built into the unittest module. See this previous question.
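Presumably that refers to something like the following: each test module ends with a call to unittest.main(), and newer Pythons (2.7+) can also discover tests without any extra script (a hedged sketch, file names are just examples):

# tests/test_example.py
import unittest

class TestExample(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()  # lets "python tests/test_example.py" run this module's tests

$ python -m unittest discover   # Python 2.7+: walks the project for test_*.py files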
I have a very large python project with a very large test suite. Recently we have decided to quantify the quality of our test-coverage.
I'm looking for a tool to automate the test coverage report generation. Ideally I'd like to have attractive, easy to read reports but I'd settle for less attractive reports if I could make it work quickly.
I've tried Nose, which is not good enough: it is incompatible with distribute / setuptools' namespace package feature. Unfortunately, nose coverage will never work for us since we make abundant use of this feature. That's a real shame, because Nose seems to work really nicely in Hudson (mostly).
As an alternative, I've heard that there's a way to do a Python coverage analysis in Eclipse, but I've not quite locked-down the perfect technique.
Any suggestions welcome!
FYI we use Python 2.4.4 on Windows XP 32bit
Have you tried using coverage.py? It underlies "nose coverage", but can be run perfectly well outside of nose if you need to.
If you run your tests with (hypothetically) python run_my_tests.py, then you can measure coverage with coverage run run_my_tests.py, then get HTML reports with coverage html.
From your description, I'm not sure what problem you had with nose, especially whether it was a nose issue, or a coverage.py issue. Provide some more details, and I'm sure we can work through them.
Ned has already mentioned his excellent coverage.py module.
If the problem you're having is something nose specific, you might want to consider using another test runner. I've used py.test along with the pytest_coverage plugin that lets you generate coverage statistics. It also has a pytest_nose plugin to help you migrate.
However, I don't understand exactly what the problem you're facing is. Can you elaborate a little on the "distribute / setuptools' namespace package feature" you mentioned? I'm curious to know what the problem is.
I'm writing quite a few unit tests and using nosetests to run them. Nose certainly makes it nice and easy to run tests, but the output can be pretty cluttered at the best of times, and a downright mess at others, depending on warnings and errors.
I was wondering if there are any visual tools that wrap nose so that the feedback is cleaner. A results grid showing test names and run times and indicating success/failure status with colours would be a huge visual aid. Better still, if it could split up the error messages and tracebacks on a case-by-case basis, it would really cut down on the amount of clutter when a large number of tests fail on a major change.
This is starting to read like a wishlist, but does anything even close to this exist?
There's a very similar question here.
It doesn't look like there's a standardized GUI for nosetests (as in, standardized for just the Python interface). There seem to be a few GUI plugins for nosetests depending on which IDE you're using - a quick search brought up this link for Komodo.
However, there are many other GUI based unit testing frameworks out there for Python - have you tried PyUnit?
Well, what you can do is use the Jenkins integration server. We use it with py.test and PHPUnit (it is really simple to use it with any other testing framework), and it provides us with some fancy (and useful!) graphs containing the following (a sketch of the commands that produce these reports for Jenkins follows the list):
Number of tests that run
Number of tests passing/failing
The stack trace for the failed tests
Execution times for tests
Code coverage for the tests
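For instance, the reports Jenkins reads are typically XML files produced during the test run; with the tools mentioned in this thread that might look roughly like this (check the exact flags against your versions):

$ py.test --junitxml=results.xml   # JUnit-style XML for Jenkins' test report
$ nosetests --with-xunit           # the nose equivalent, writes nosetests.xml
$ coverage xml                     # Cobertura-style XML for the coverage plugin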
You can also relate it to git/svn/mercurial...
Here is an example of how to configure Hudson (for the record, Jenkins is a fork of Hudson, so the instructions are still valid :) in order to provide the elements from your wish list.
An easier option is to use an IDE such as PyCharm; it has integration with nosetests and will tell you which tests passed/failed and their times. However, Jenkins stores historical data.
Good luck :)
Have you taken a look at the Nose plugins page? A quick glance revealed at least a couple that stated they could provide a GUI interface.