Common convention for invoking unit tests across a Python project?

Is there a standard convention, or even a growing one, around where and how to invoke the tests associated with a project? In many projects, I've seen it bundled into a Makefile, a separate test.py script at the top level of the project, etc., to do the work.
I looked around for some common thing with setup.py, but didn't spot anything there (granted, I didn't look hard). What's common and best practice?

The short answer is yes: there's a simple convention built into the unittest module. See this previous question.
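For reference, test discovery is built into unittest itself (Python 2.7+/3.2+); run from the project root, a typical invocation looks something like:

python -m unittest discover
python -m unittest discover -s tests -p "test_*.py"

The second form points discovery at a tests directory explicitly; the -s flag sets the start directory and -p the filename pattern.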

Related

Which Python packages aren't used in application

Can you recommend a CLI / python package to tell me which modules aren't being imported into my application?
I considered using coverage + nosetests, but many of the unwanted/not needed packages have tests (to the credit of the previous developer).
As background, I'm dealing with a legacy code base and want to remove whatever isn't being used, so I can reduce my mental load before a refactor.
There's a Python module called vulture for doing exactly this. I haven't used it, but its description says:
Vulture finds unused classes, functions and variables in your code.
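Typical usage is just to point it at your files or package directories (the paths here are hypothetical):

pip install vulture
vulture myscript.py mypackage/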

Import modules from parent folder with Flask

My folder tree:

project/
    app/
        __init__.py
        models.py
        dir/test1.py
        dir/__init__.py
    run.py
    dir/test2.py
    dir/__init__.py
If I want to do a
from app.models import Whatever
from test1 and test2, the only thing that works is to manually append to sys.path, something like:
sys.path.append(os.path.join(os.path.dirname(__file__), "../.."))
However, there are a ton of answers on SO saying that messing with sys.path will give me trouble down the line (why?). Sadly, after more than an hour of googling I still haven't figured out the right way to do imports, and I'm getting really confused.
It is enormously better to test than not test, so if you need to append paths to sys.path to make it work--and in this directory configuration, you will--that's a reasonable and pragmatic step.
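For instance, a minimal sketch of that pragmatic fix at the top of a test file, assuming the two-levels-up layout from the question:

import os
import sys

# Make the project root importable so the `app` package can be found.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from app.models import Whatever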
However, in general it is better not to fiddle with module load paths manually. It assumes your code will always be loaded in a directory right outside your test folder, which might not always be true. But "you will run into problems down the line" is pretty weak tea.
The bigger issue is that you cannot use that little path-patch to accomplish the kind of automated testing and test management you will ultimately want/need. Much better is to learn to use a real test harness / test runner such as pytest or nose. Even better if you also use a macro test runner such as tox. The combination will automatically install your software (including any declared dependencies) in a completely new, pristine virtual environment. This will help you test not just the normal operability of your module, but also its installability. It will also help you easily run tests across different versions of Python with very little additional effort. (I currently test across a range of 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, and several versions of PyPy, for example--not because I use all those versions, but because it doesn't cost me anything extra to make sure my software runs across a large swath of the entire Python universe.)
Fair warning: Setting up your testing environment is a bit of a pill the first time out, requiring a fair amount of effort and learning of "mechanic" skills. But this is a one-time investment that will pay dividends across all of your Python work.
So long story short, patching sys.path is a fair place to start. But you'll ultimately need more generality, breadth, depth, test fixtures, and automation in your testing--and path patching can't help you with those. Upgrading to real testing tools can.
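To give a flavor of that upgrade, a minimal tox.ini might look something like this (the Python versions and the choice of pytest here are assumptions, not part of the original answer; adapt to your project):

[tox]
envlist = py27, py35

[testenv]
deps = pytest
commands = pytest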

python 3 import from subdir

My project has to be extensible; I have a lot of scripts with the same interface that look things up online. Before, I was using __import__, but that does not let me put my 'plugins' in a dedicated directory:
root/
    main.py
    plugins/
        [...]
So my question is: is there a way to individually import modules from that subdirectory? I'm guessing importlib, but I'm so lost in how the Python module-loading process works... What I want to do is something like this:
for pluginname in plugins:
    plugin = somekindofimport("plugins/{name}".format(name=pluginname))
    plugin.unifiedinterface()
Also, as a side question: is the way I'm trying to achieve extensibility a good one?
I'm on Python 3.3.
Stop thinking in terms of pathnames and start thinking in terms of packages. Read Packages in the tutorial, and if you want more detail see The import system.
But the basic idea is this:
Create a file named plugins/__init__.py. It can be empty; that's enough to turn plugins into a package, which means you can import modules from that package with:
import plugins.plugin
So, how do you do this dynamically? That's what importlib is for. (You can also use __import__ here, but it's less flexible, and less readable in non-trivial cases, so unless you need pre-3.3 compatibility, don't.)
plugin = importlib.import_module('plugins.{name}'.format(name=pluginname))
It would probably be cleaner to import plugins to get the package, and then use relative imports from within that package, as shown in the examples in the import_module docs.
This also means Python takes care of the .pyc creation and caching, etc.
And it means that you can later expand plugins to be a "namespace package", which can be split across multiple directories like /usr/share/myapp/plugins for stock plugins, /etc/myapp/plugins for site plugins and ~/myapp/plugins for user-specific plugins.
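Putting the pieces above together, here is a minimal sketch of a plugin loader, assuming each plugin module exposes the unifiedinterface() function from the question:

import importlib
import pkgutil

import plugins  # the package created by adding plugins/__init__.py

# Discover every module inside the plugins package and call its entry point.
for _finder, name, _ispkg in pkgutil.iter_modules(plugins.__path__):
    plugin = importlib.import_module('plugins.{name}'.format(name=name))
    plugin.unifiedinterface()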
If you really, really want to import from a directory that isn't a package, you can create a module loader and use it, but that's a whole lot of work for no actual benefit. (It's actually not that hard in 3.3 (SourceLoader and friends will do most of the work for you), but you will find almost no examples out there to guide you; instead, you'll find examples of the 2.6-3.2 way, or the 2.0-2.5 way, both of which are hard.) Plus, it means that if someone creates a plugin named, say, gzip, you can end up blocking the stdlib gzip module with the plugin. (That's especially fun if the gzip plugin tries to use the gzip stdlib module, as it likely will…) If the plugin ends up being named plugins.gzip, there's no problem.
Also, as a side question: is the way I'm trying to achieve extensibility a good one?
As long as you only want to support 3.3+, yes, I think this is a great solution.
Before 3.3, using a package for plugins was a lot more problematic. People have come up with a variety of different plugin systems, in one case going so far as to dynamically create module objects and execfile code into them. If you need to deal with that, I would suggest looking at existing Python apps with plugins (e.g., MusicBrainz Picard) to get different ideas.

A tool to validate the structure of a Python Package?

I started writing Python code not too long ago and everything just works, but I have been having problems writing a package. I was wondering if there is such a thing as a "package validation tool". I know I could just start up a REPL and start importing the module, but... is there a better way? Is there a tool that could tell me "you have these possible errors"?
Or maybe there is something in the middle: is there a way to test a Python package's structure?
As always, thanks in advance!
If you call a module using:
python -m module
Python will load/execute the module, so you should catch crude syntax errors. Also, if the module has a block like:
if __name__ == "__main__":
    do_something()
it will be called. For some small self-contained modules I often use this block to run tests.
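As a tiny illustration of that pattern (the module and function names here are made up):

# mymodule.py
def add(a, b):
    return a + b

if __name__ == "__main__":
    # Quick self-test, run with: python -m mymodule
    import unittest

    class AddTests(unittest.TestCase):
        def test_add(self):
            self.assertEqual(add(2, 3), 5)

    unittest.main()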
Given the very dynamic nature of Python, it is very hard to check for correctness if the module author is not using TTD. There is no silver bullet here. There are tools that will check for "code smells" and compliance with standards (dynamic languages tend to generate a profusion of linters).
pylint
PyChecker
PyFlakes
PEP8
A good IDE like PyCharm can help, if you like IDEs.
These tools can help, but they are still far from the assurance of statically typed languages, where the compiler can catch many errors at compile time. For example, Go seems to be designed to have a very pedantic compiler, and Haskell programs are said to be like mathematical proofs.
If you are coming from a language with strong compile-time checks, just relax. Python is kind of a "throw it against the wall and see if it sticks" language. Some of the Python "macho" principles:
duck typing
EAFP
We are all consenting adults
There is no tool to test the package structure per se, and I'm unsure of what would be tested. Almost any structure is a valid structure...
But if you are distributing your module, there are some tools to help you check your package data; they may be useful:
Pyroma will check the package's metadata.
check-manifest will check the MANIFEST.in file.
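Typical invocations, run from the project root (assuming both tools are installed from PyPI):

pip install pyroma check-manifest
pyroma .
check-manifest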
I have both of them installed and also use zest.releaser, which has some basic sanity checks of its own. But none of these will check that the code is OK; they won't look for missing __init__.py files, for example.

Unit tests in Python [closed]

Does Python have a unit testing framework compatible with the standard xUnit style of test framework? If so, what is it, where is it, and is it any good?
Python has several testing frameworks, including unittest, doctest, and nose. The most xUnit-like is unittest, which is documented on Python.org.
unittest documentation
doctest documentation
I recommend nose.
It is the most Pythonic of the unit test frameworks. The test runner runs both doctests and unittests, so you are free to use whatever style of test you like.
There's testoob, which is a pretty complete testing suite. It's also xUnit-like, and it has a nice reporting option.
Consider py.test. It's not exactly analogous to NUnit, but it's very good, with nice features including test auto-discovery and a watch mode: "Watch the tests and code; when something changes, rerun the tests that failed last time. As soon as all the tests pass, switch to running all the tests whenever something changes."
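For a taste of the auto-discovery, a file like this (the name and contents are made up) is collected and run by py.test with no boilerplate at all:

# test_sample.py -- py.test collects test_*.py files and test_* functions
def test_upper():
    assert "foo".upper() == "FOO"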
@Greg: PyUnit is included in the standard library as unittest.
I recommend Nose.
After the reasonably simple installation, you just have to run "nosetests" in your project folder and Nose will find all your tests and run them. I also like the collection of plugins (coverage, GAE, etc.) and the ability to call Nose directly from within my Python scripts.
There is also PyUnit which might be what you're looking for.
I've never used xUnit, so I can't tell you how the frameworks compare, but here is a script I wrote which uses the unittest framework (to check the API works as it should) and doctest (to check the examples I've given work).
My only problem is that checking something raises an exception is slightly convoluted: you have to pass assertRaises a function/lambda that raises the exception, rather than just writing the call itself, unlike the rest of the framework. Other than that, it does what it should, reliably, and it has been included in the default Python distribution for quite some time.
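To illustrate, here is a minimal sketch of both styles (since Python 2.7, unittest also accepts assertRaises as a context manager, which reads more naturally):

import unittest

class ExceptionTests(unittest.TestCase):
    def test_raises_callable_style(self):
        # Pass the callable and its arguments separately.
        self.assertRaises(ValueError, int, "not a number")

    def test_raises_context_manager(self):
        # 2.7+: write the offending call directly inside the block.
        with self.assertRaises(ValueError):
            int("not a number")

if __name__ == "__main__":
    unittest.main()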
nose seems to be the best combination of flexibility and convenience. It runs unittests, doctests, coverage (with an extension) and py.test-like tests from one framework and does so admirably. It has enough popularity that it has had some IDE integration done as well for Komodo Edit and I wouldn't be surprised to see it elsewhere as well.
I like it for one strong reason: I almost always doctest before writing more extensive tests in another framework. This is because, for basic tests, doctests kill two birds with one stone. You get executable tests (although they are a bit clumsy to write well sometimes) as well as API documentation and interactive documentation at the same time. nose will run these with the bundled doctest extension when you use a command-line option (--with-doctest).
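For example, a doctest like this (the function is made up) serves as both a test and documentation, and nosetests --with-doctest will execute it:

def square(x):
    """Return x squared.

    >>> square(3)
    9
    """
    return x * x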
I say this having come from py.test as my former favorite. While it is great, nose tests are similar enough to me that I don't miss it, and I like the integration of the various test methodologies under one roof, so to speak. YMMV, but I recommend taking a good look at nose before choosing another. If you aren't familiar with py.test tests, you should look at them as well. I find them terrific because they are usually written in such a way that they can be easily debugged without the testing framework, which makes one less tricky system involved in the debugging session. I find that alone invaluable, while they are also easier to write than unittest tests in my opinion.
