I am using the following command to run tests:
nosetests --with-coverage --cover-html --cover-package mypackage
I would like the coverage report to stay accurate even when a developer adds new, untested code to the package.
For example, imagine a developer adds a new module to the package but forgets to write tests for it. Since the tests may not import the new module, the code coverage may not reflect the uncovered code. Obviously this is something which could be prevented at the code review stage but it would be great to catch it even earlier.
My solution was to write a simple test which dynamically imports all modules under the top-level package. I used the following code snippet to do this:
import os
import pkgutil

# pkg_dirname is the directory that contains the top-level package
for loader, name, is_pkg in pkgutil.walk_packages([pkg_dirname]):
    mod = loader.find_module(name).load_module(name)
Dynamically importing sub-packages and sub-modules like this does not get picked up by the code coverage plugin in nose.
Can anyone suggest a better way to achieve this type of thing?
The problem seems to be the method for dynamically importing all packages/modules under the top-level package.
Using the method defined here seems to work. The key difference is using importlib.import_module instead of the loader's load_module method. However, importlib was introduced in Python 2.7 and 3.1, so this solution is not appropriate for older versions of Python.
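For reference, a minimal sketch of that importlib-based approach (the package name mypackage matches the nose command above; the helper name is mine):

import importlib
import pkgutil

def import_all_submodules(package_name):
    """Recursively import every module and sub-package under package_name."""
    package = importlib.import_module(package_name)
    # With a dotted prefix, walk_packages yields fully qualified module
    # names that import_module can load directly.
    for _loader, name, _is_pkg in pkgutil.walk_packages(
            package.__path__, prefix=package.__name__ + '.'):
        importlib.import_module(name)

import_all_submodules('mypackage')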
I have updated the original code snippet to use __import__ instead of the ImpLoader.load_module method. This also seems to do the trick.
import os
import pkgutil

for loader, name, is_pkg in pkgutil.walk_packages([pkg_dirname]):
    # find_module returns a pkgutil.ImpLoader; its fullname attribute
    # holds the dotted module name, which __import__ can load normally.
    mod = loader.find_module(name)
    __import__(mod.fullname)
Related
I wrote a custom Python package for Ansible to handle business logic for some servers I manage. I have multiple files, and they reference each other by re-importing the package.
So my package, named <MyCustomPackage>, has functions <Function1>, <Function2>, <Function3>, etc., all in their own files. Some of these functions reference functions in the same package, so to do that the file has:
import MyCustomPackage
at the top. I did it this way instead of a relative import because I'm also unit testing these, and mocking would not work with relative paths because of an __init__.py file in the test directory which was needed for test discovery. The only way I could mock was by importing the package itself. Seemed simple enough.
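For what it's worth, the mocking that works with this style looks roughly like this (a hypothetical test; it assumes the package's __init__.py exposes Function1 and Function2, and that Function1 calls MyCustomPackage.Function2() through the package import):

# test_function1.py -- hypothetical test module
import unittest
from unittest import mock

import MyCustomPackage


class TestFunction1(unittest.TestCase):
    def test_function1(self):
        # Patch the name on the package object, which is where
        # Function1 looks it up at call time.
        with mock.patch('MyCustomPackage.Function2', return_value=42):
            self.assertEqual(MyCustomPackage.Function1(), 42)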
The problem is with Ansible. These packages are in module_utils. I import them with:
from ansible.module_utils.MyCustomPackage import MyCustomPackage
but when I use the commands I get module not found errors, which I traced back to the import MyCustomPackage statement in the package itself.
So, how should I be structuring my package? Should I try again with relative file imports, or have the package modify the path so it's found with the friendly name?
Any tips would be helpful! Or if someone has a module they've written with Python modules in module_utils and unit tests that they'd be willing to share, that'd be great also!
Many people have problems with relative imports and imports in general in Python because they are ambiguous and surprisingly depend on your current working directory (and other things).
Thus I've created an experimental, new import library: ultraimport
It gives you more control over your imports and lets you do file system based, relative imports.
Given that you have a file function1.py, to import a function from function2.py, you would then write:
import ultraimport
Function2 = ultraimport('__dir__/function2.py', 'Function2')
This will always work, no matter how you run your code. It also does not force you to a specific package structure. You can just have any files you like.
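For comparison, the explicit relative imports the question asks about would look something like this inside the package (a minimal sketch using the question's placeholder names):

# MyCustomPackage/function1.py
# Bind the sibling module rather than the bare package name, so the
# import resolves the same way whether the package is loaded as
# MyCustomPackage or as ansible.module_utils.MyCustomPackage.
from . import function2

def Function1():
    return function2.Function2()

With this form the package no longer needs to be importable under the top-level name MyCustomPackage, which is exactly what breaks once Ansible loads it as ansible.module_utils.MyCustomPackage; tests can still mock it by patching the sibling module where the name is looked up.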
I'm self-taught in the Python world, so some of the structural conventions are still a little hazy to me. However, I've been getting very close to what I want to accomplish, but just ran into a larger problem.
Basically, I have a directory structure like this, which will sit outside of the normal python installation (this is to be distributed to people who should not have to know what a python installation is, but will have the one that comes standard with ArcGIS):
top_directory/
    ArcToolbox.tbx
    scripts/
        ArcGIStool.py (script for the tool in the .tbx)
        pythonmod/
            __init__.py
            general.py
            xlrd/ (copied from my own python installation)
            xlwt/ (copied from my own python installation)
            xlutils/ (copied from my own python installation)
So, I like this directory structure, because all of the ArcGIStool.py scripts call functions within the pythonmod package (like those within general.py), and all of the general.py functions can call xlrd and xlwt functions with simple "import xlrd" statements. This means that if the user desired, he/she could just move the pythonmod folder to the python site-packages folder, and everything would run fine, even if xlrd/xlwt/xlutils are already installed.
THE PROBLEM:
Everything is great, until I try to use xlutils in general.py. Specifically, I need to "from xlutils.copy import copy". However, this sets off a cascade of import errors. One is that xlutils/copy.py uses "from xlutils.filter import process,XLRDReader,XLWTWriter". I solved this by modifying xlutils/copy.py like this:
try:
    from xlutils.filter import process, XLRDReader, XLWTWriter
except ImportError:
    from filter import process, XLRDReader, XLWTWriter
I thought this would work fine for other situations, but there are modules in the xlutils package that need to import xlrd. I tried following this advice, but when I use
try:
    import xlrd
except ImportError:
    import os, sys, imp
    # locate the bundled copy of xlrd relative to the running script
    path = os.path.dirname(os.path.dirname(sys.argv[0]))
    xlrd = imp.load_source("pythonmod.xlrd", os.path.join(path, "xlrd", "__init__.py"))
I get a new import error: in xlrd/__init__.py, the info module is imported (from xlrd/info.py), but when I use the above code, I get an error saying that the name "info" is not defined.
This leads me to believe that I don't really know what is going on, because I thought that when the __init__.py file was imported it would run just like normal and look within its containing folder for info.py. This does not seem to be the case, unfortunately.
Thanks for your interest, and any help would be greatly appreciated.
p.s. I don't want to have to modify the path variables, as I have no idea who will be using this toolset, and permissions are likely to be an issue, etc.
I realized I was using imp.load_source incorrectly. The correct syntax for what I wanted to do should have been:
imp.load_source("xlrd", os.path.join(path, "xlrd", "__init__.py"))
In the end though, I ended up rewriting my code to not need xlutils at all, because I continued to have import errors that were causing many more problems than were worth dealing with.
Okay, so in the past, I've made my own Python packages with Python 2.x (most recently, 2.7.5). It has worked fine. Let me explain how I did that, for reference:
1. Make a directory within the working directory. We'll call it myPackage.
2. Make a file called __init__.py in the directory myPackage.
3. Make sure all the modules that you want to be part of the package are imported within __init__.py. These modules are typically in the myPackage folder.
4. From a Python program in the working directory, type import myPackage (and it imports fine, and is usable).
However, in Python 3, I get errors with that. (ImportError: No module named 'Whatever the first imported module is')
I researched the problem and found the following:
Starred imports don't work in Python 3.3.
The __init__.py file is not required in Python 3.3.
So, I removed the stars from my imports, and leaving the __init__.py file in, I still got errors (ImportError: No module named 'Whatever the first imported module is'). So, I removed the __init__.py file, and I don't get any errors, but my package doesn't include any of my modules.
Okay, so I discovered by doing a web search for python3 __init__.py or some such that I can do the following, although I don't have any clue if this is the standard way of doing things:
In the modules in the package, make sure there are no plain imports (not just no starred ones). Only do from myModule import stuff, but put a . in front of myModule: e.g. from .myModule import stuff. Then I can do import myPackage.oneOfMyModules.
I found that by following this rule in the __init__.py file, it also works.
Once again, I don't know if this is how it's supposed to work, but it seems to work.
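For concreteness, the layout that ended up working looks like this (a minimal sketch using the question's names; stuff stands in for whatever the module defines):

# myPackage/__init__.py
from .myModule import stuff        # explicit relative import (Python 3)

# myPackage/myModule.py
stuff = "whatever the module defines"

# a program in the working directory next to myPackage/
import myPackage
print(myPackage.stuff)             # works because __init__.py re-exported it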
I found this page that is supposed to have something to do with the changes in packages in Python 3.something, but I'm not sure how it relates to what I'm doing:
http://legacy.python.org/dev/peps/pep-0420/
So, what is the standard way to do this? Where is it documented (actually saying the syntax)? Is the way I'm doing it right? Can I do regular imports instead of from package import module?
After analyzing some Python 3 packages installed on my system (I should have tried that to start with!) I discovered that they often seem to do things a little differently. Instead of just doing from .myModule import stuff they would do from myPackage.myModule import stuff (inside the modules in the package). So, that works, too, I suppose, and seems to be more frequently used.
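In other words, the __init__.py from the sketch above could equally be written with the fully qualified spelling:

# myPackage/__init__.py -- absolute (fully qualified) form
from myPackage.myModule import stuff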
My program reads a python script for configuration. So far I'm loading the script called lab.py like this:
self.lab_file = "/not/interesting/path/lab.py"
sys.path.insert(0, os.path.dirname(self.lab_file))
import lab as _config
But when I'm unit testing it, I see strange behavior:
when I launch only one unit test calling this code, it succeeds
when I launch several unit tests, each of them calling this code independently, some tests fail
Tracing the problem with logging, it seems the lab script is imported only the first time. This behavior is consistent with how Python imports work, but I was assuming that unit tests are isolated from each other. Am I wrong? If tests are not independent with respect to imports, how can I write tests that force the loading of my script each time?
Try using reload.
For example:
import lab as _config
reload(_config)
In Python 2, reload is a builtin function.
In Python 3.2+, reload is in the imp module, but deprecated in 3.4+.
In Python 3.4+, reload is in the importlib module.
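Putting those together, a version-tolerant sketch (it falls back to the Python 2 builtin when neither import succeeds):

try:
    from importlib import reload  # Python 3.4+
except ImportError:
    try:
        from imp import reload    # Python 3.2 and 3.3
    except ImportError:
        pass                      # Python 2: reload is a builtin

import lab as _config
_config = reload(_config)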
I would suggest deleting the module from sys.modules
import sys

if 'lab' in sys.modules:
    del sys.modules['lab']
import lab as _config
Just repeating the import will not work, because import first checks sys.modules to see whether the module is already loaded.
If you import and then reload, it works because the import first binds the module from sys.modules into the local namespace, and reload then re-executes the module's code from its file.
Maybe it helps if you run nose with this flag:
--with-isolation
From the nose docs:
Enable plugin IsolationPlugin: Activate the isolation plugin to isolate changes to external modules to a single test module or package. The isolation plugin resets the contents of sys.modules after each test module or package runs to its state before the test. PLEASE NOTE that this plugin should not be used with the coverage plugin, or in any other case where module reloading may produce undesirable side-effects. [NOSE_WITH_ISOLATION]
EDIT:
OK, I managed to isolate the bug and the exact, complete code to reproduce it. But it appears to be either by design or a bug in Python.
Create two sibling packages, admin and General, each with its own __init__.py, of course.
In the package admin put the file 'test.py' with the following code:
from General.test02 import run
import RunStoppedException
try:
    run()
except RunStoppedException.RunStoppedException, e:
    print 'right'
except Exception, e:
    print 'this is what i got: %s' % type(e)
and also in admin put the file 'RunStoppedException.py' with the following code:
class RunStoppedException(Exception):
    def __init__(self):
        Exception.__init__(self)
In the package General put the file test02.py with the code:
import admin.RunStoppedException
def run():
    raise admin.RunStoppedException.RunStoppedException()
The printout:
this is what i got: <class 'admin.RunStoppedException.RunStoppedException'>
when it should have been 'right'. This only happens when one file sits in the same directory as the exception, so the two files import it differently.
Is this by design, or a bug of python?
I am using Python 2.6, running it under Eclipse+PyDev.
import admin.RunStoppedException
This is an ambiguous relative import. Do you mean RunStoppedException from the admin top-level module? Or from mypackage.admin when you're in a package? If your current working directory (which is added to the module search path) happens to be inside the package, it could be either, depending on whether Python knows it's inside a package, which depends on how you're running the script.
If you've got both import admin.RunStoppedException and import RunStoppedException in different modules, that could very well import two copies of the same module: a top-level RunStoppedException and a submodule admin.RunStoppedException of the package admin, resulting in two instances of the exception, and the subsequent mismatch in except.
So don't use implicit relative imports. They are in any case going away (see PEP 328). Always spell out the full module name, e.g. import mypackage.admin.RunStoppedException. However, avoid using the same identifier for your module name and your class name, as this is terribly confusing. Note that Python will allow you to say:
except RunStoppedException:
where that identifier is referring to a module and not a subclass of Exception. This is for historical reasons and may also go away, but for the meantime it can hide bugs. A common pattern would be to use mypackage.exceptions to hold many exceptions. One-class-per-file is a Java habit that is frowned on in Python.
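That pattern might look like this (a sketch with a hypothetical mypackage; the second class is only there to show the module holding several exceptions):

# mypackage/exceptions.py
class RunStoppedException(Exception):
    pass

class ConfigError(Exception):
    pass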
It's also a good idea generally to try to keep the importing of module contents (like classes) down as much as possible. If something changes the copy of RunStoppedException inside its module, you'll now have different copies in different scripts. Though classes mostly don't change, module-level variables may, and monkey-patching and reloading become much harder when you're taking stuff outside of its owner module.
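Following that advice, callers would bind the module rather than pulling the class out of it (continuing the hypothetical mypackage sketch above):

# somewhere else in mypackage
from mypackage import exceptions

def run():
    # The attribute lookup happens at raise time, so reloads and
    # monkey-patches of mypackage.exceptions are seen here.
    raise exceptions.RunStoppedException()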
I can only see two reasons:
1. You have two different Exception classes with the same name.
Edit: I think the culprit is this part, because you import the Exception class in two ways:
from RunStoppedException import RunStoppedException
from admin.RunStoppedException import RunStoppedException
Make them consistent and your problem will be gone.
2. You are using some IDE which is interfering with your code. This sounds bizarre, but try to run your code on the command line if you aren't already.
Even if 1 and 2 don't fix your problem, write a small piece of code demonstrating the problem which we can run and fix here; but I am sure we will not need to, because once you have written such a small standalone script where you can replicate the problem, you will find the solution too.
Works fine for me:
[/tmp] ls admin/
RunStoppedException.py __init__.py test.py
RunStoppedException.pyc __init__.pyc
[/tmp] ls General/
__init__.py __init__.pyc test02.py test02.pyc
[/tmp] python -m admin.test
right
[/tmp]
Running on:
Python 2.6.4 Stackless 3.1b3 060516 (release26-maint, Dec 14 2009, 23:28:06)
[GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
My guess is that you have another "General" on your path somewhere, perhaps from earlier tests, and that's why the exceptions don't match.
Did you try the id/inspect.getabsfile debugging? If so, what was the output?
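For reference, that debugging check might look like this (a sketch in the thread's Python 2; it assumes both import spellings are reachable, as in the setup above):

import inspect

import RunStoppedException as top_level
import admin.RunStoppedException as in_package

print id(top_level), id(in_package)    # different ids => two distinct module objects
print inspect.getabsfile(top_level)    # both lines can point at the same file,
print inspect.getabsfile(in_package)   # loaded twice under different names
print top_level.RunStoppedException is in_package.RunStoppedException  # False if duplicated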