So, I have a project tested using pytest and using the pytest-cov module to report coverage.
Some of the code involves bootstrapping and configuration, and the canonical way of running it is to start it via a shell script. The current unit tests use the subprocess module to run this shell script on mocked data. I would like this code to count towards the coverage report, and I am specifically trying to avoid:
1) Heavily modifying the wrapper to support the test scenario, which also runs the risk of doing 2).
2) Running the bootstrap code outside the wrapper (e.g. by forking the process and running the code directly), since I want these tests to be as realistic as possible.
Is there any (canonical, Pythonic) way of propagating coverage collection to all subprocesses, even when they are launched using subprocess.Popen? I could easily solve the problem with a hack, so that is not what I am looking for.
This actually works out of the box. The reason I thought it did not was that the path mappings with respect to the Docker volumes were incorrect, as the modules loaded by the subprocess were bind-mounted into the container. Coverage is only reported when the paths match up exactly.
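For reference, coverage.py also documents a manual hook for measuring subprocesses (pytest-cov wires this up automatically); a minimal sketch, assuming you control the environment the wrapper's Python processes inherit:
# sitecustomize.py (or a .pth file) on the subprocess's sys.path
import coverage

# Starts collection in any Python process whose environment contains
# COVERAGE_PROCESS_START pointing at a coverage config file, e.g. the
# wrapper exporting COVERAGE_PROCESS_START=/path/to/.coveragerc before
# launching Python.
coverage.process_startup()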
My current use case is that I use travis-ci very happily to run my test cases for a python project. This reports a fail or pass based on whether the py.unit tests pass.
I would like to add pep8 checking to this repository as well, but I don't want my core functionality tests to fail if there is some incorrectly formatted code, but I would like to know about it.
Any possible ways of dealing with this would be useful, but my immediate thought was: is there any way of having two separate test runners running off the same repository? ".travis.yml" running the main tests, and a separate process monitoring my pep8 compliance from ".travis2.yml", for example.
I would then have two jobs running and could see at a glance (from the GitHub badge, for example) whether my core functionality tests are still OK, but also how my pep8 compliance is going.
Thanks
Mark
From http://docs.travis-ci.com/user/customizing-the-build/ :
Travis CI uses .travis.yml file in the root of your repository to
learn about your project and how you want your builds to be executed.
A mixture of matrix and allow_failures could be used in a single .travis.yml file to address your use case of having two jobs run, where one build reports your functionality tests and a second build gives you feedback on your pep8 compliance.
For example, the following .travis.yml file causes two builds to occur on Travis. The pep8 check only runs in one of the builds (i.e. where PEP=true), and if it fails it is not considered a failure, due to allow_failures:
language: python
env:
- PEP=true
- PEP=false
matrix:
allow_failures:
- env: PEP=true
script:
- if $PEP ; then pep8 ; fi
- python -m unittest discover
I have third-party software which is able to run Python scripts using something like:
software.exe -script pythonscript.py
My company is heavily dependent on this software as well as on the scripts we develop for it. Currently we have some QA that checks the output of the scripts, but we really want to start unit testing the scripts to make it easier to find bugs and make the test system more complete.
My problem is: how is it possible to run "embedded" unit tests like this? We use PyDev + Eclipse, and I tried to use its remote debugging to make it work with the unit tests, but I cannot really make it work. How can I make the server connection "feed" the unit tests?
The other idea would be to parse the stdout of the software, but that would not really be a unit test... And the added complexity it seems to bring makes this approach less interesting.
I would expect that something like this has already been done somewhere else and I tried googling for it, but maybe I am just not using the correct keywords. Could anyone give me a starting point?
Thank you
A bit more info would be helpful. Are you using a testing framework (e.g. unittest or nose), or if not, how are the tests structured? What is software.exe?
In Python, unit tests are really nothing more than a collection of functions which raise an exception on failure, so they can be called from a script like any other function. In theory, therefore, you can simply create a test runner (if you're not already using one such as nose) and run it as software.exe -script runtests.py. In PyDev, you can set up software.exe as a customised Python interpreter.
If the problem is that software.exe hides stdout, then simply write the results to a log file instead. You could also create an environment which mocks the one provided by software.exe and run the tests using python.exe.
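A minimal sketch of such a runtests.py, assuming a tests/ directory of unittest test cases (the directory and log file names are placeholders), writing results to a file in case software.exe swallows stdout:
import unittest

# Discover all unittest test cases under the (assumed) tests/ directory.
suite = unittest.defaultTestLoader.discover("tests")

# Write results to a log file, since software.exe may hide stdout.
with open("test_results.log", "w") as log:
    unittest.TextTestRunner(stream=log, verbosity=2).run(suite)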
If the unit tests are for your code and not for the functionality provided by software.exe, then you could run the tests using a standalone Python interpreter, mocking the relevant parts of software.exe where necessary. As an intermediate step, you could try to run unittest-based scripts using software.exe.
Well, generally speaking, testing software should be done by a continuous integration suite (and Jenkins is your friend).
Now, I think you'll have to test your scripts pythonscript.py by adding a test() function inside each script that emulates the possible environments its entry point will be given. You'll then be able to use unittest to execute the test functions of all your scripts. You can also embed tests in doctests, but I personally don't like that.
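As a rough illustration of that idea (the run() entry point and the fake environment dictionary are made-up names, since the real interface of your scripts is unknown):
import unittest

def run(environment):
    # Hypothetical entry point of pythonscript.py, normally driven by software.exe.
    return environment.get("input", 0) * 2

class TestRun(unittest.TestCase):
    def test_run_with_emulated_environment(self):
        # Emulate the environment software.exe would normally provide.
        fake_environment = {"input": 21}
        self.assertEqual(run(fake_environment), 42)

if __name__ == "__main__":
    unittest.main()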
And then, in your software.exe, you'll be able to execute the tests by emulating all the environment combinations. But as you don't say much about software.exe, I can't help you more... (What language is it written in? Is software.exe itself already unit tested?)
nose is a test runner which extends PyUnit. Is it possible to write, e.g.,
$ nosetests --with-shell myTest.py -myargs test
If not, then is there a plugin, or do I need to develop it myself?
Any suggestions ?
Nose is not a general test harness. It's specifically a Python harness which runs Python unit tests.
So, while you can write extensions for it to execute scripts and mark them as successes or failures based on the exit status or an output string, I think it's an attempt to shoehorn the harness into doing something it's not really meant to do.
Rather than extending nose to run scripts directly, you should package your tests as Python functions or classes and have them use a library to run the external scripts, translating their output or behaviour into something that nose can interpret.
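For instance, a test along these lines (the script name, arguments, and expected output are placeholders) lets nose or unittest drive the external script and judge it by its exit status and output:
import subprocess
import unittest

class TestExternalScript(unittest.TestCase):
    def test_script_succeeds(self):
        # Run the external script and capture its output.
        result = subprocess.run(
            ["./my_script.sh", "test"],
            capture_output=True,
            text=True,
        )
        # Translate exit status and output into ordinary assertions.
        self.assertEqual(result.returncode, 0)
        self.assertIn("OK", result.stdout)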
Also, I've experimented with nose a bit and found its extension mechanism quite clumsy compared to py.test's. You might want to give that a shot.
Basically, growl notifications (or other callbacks) when tests break or pass. Does anything like this exist?
If not, it should be pretty easy to write. The easiest way would be to:
1. Run python-autotest myfile1.py myfile2.py etc.py
2. Check if files-to-be-monitored have been modified (possibly just if they've been saved).
3. Run any tests in those files.
4. If a test fails, but in the previous run it passed, generate a growl alert. Same with tests that fail then pass.
5. Wait, and repeat steps 2-5.
The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save, but with slower tests this might take longer than the time between saves, and/or could use a lot of CPU power, etc.
The best way to do it would be to actually see which bits of code have changed: if function abc() has changed, only run tests that interact with it. While this would be great, I think it'd be extremely complex to implement.
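A very rough sketch of the polling loop in steps 2-5 above, assuming the simple approach of re-running everything on any change and printing instead of sending a Growl notification (the watched file list is a placeholder):
import subprocess
import time
from pathlib import Path

WATCHED = ["myfile1.py", "myfile2.py"]  # placeholder list of files to monitor
previous_mtimes = {}
previous_failed = None

while True:
    # Step 2: check whether any watched file has been modified (saved).
    mtimes = {f: Path(f).stat().st_mtime for f in WATCHED}
    if mtimes != previous_mtimes:
        previous_mtimes = mtimes
        # Step 3: run the tests (here: all of them, via unittest discovery).
        failed = subprocess.run(["python", "-m", "unittest", "discover"]).returncode != 0
        # Step 4: alert only when the pass/fail state flips.
        if previous_failed is not None and failed != previous_failed:
            print("Tests now", "FAILING" if failed else "PASSING")  # swap in a Growl call here
        previous_failed = failed
    # Step 5: wait, and repeat.
    time.sleep(1)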
To summarise:
Is there anything like the Ruby tool autotest (part of the ZenTest package), but for Python code?
How do you check which functions have changed between two revisions of a script?
Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback)
I found autonose to be pretty unreliable but sniffer seems to work very well.
$ pip install sniffer
$ cd myproject
Then instead of running "nosetests", you run:
$ sniffer
Or instead of nosetests --verbose --with-doctest, you run:
$ sniffer -x--verbose -x--with-doctest
As described in the readme, it's a good idea to install one of the platform-specific filesystem-watching libraries: pyinotify, pywin32 or MacFSEvents (all installable via pip, etc.).
autonose created by gfxmonk:
Autonose is an autotest-like tool for python, using the excellent nosetest library.
Autonose tracks filesystem changes and automatically re-runs any changed tests or dependencies whenever a file is added, removed or updated. A file counts as changed if it has itself been modified, or if any file it imports has changed.
...
Autonose currently has a native GUI for OSX and GTK. If neither of those are available to you, you can instead run the console version (with the --console option).
I just found this: http://www.metareal.org/p/modipyd/
I'm currently using thumb.py, but as my current project transitions from a small project to a medium sized one, I've been looking for something that can do a bit more thorough dependency analysis, and with a few tweaks, I got modipyd up and running pretty quickly.
Guard is an excellent tool that monitors for file changes and triggers tasks automatically. It's written in Ruby, but it can be used as a standalone tool for any task like this. There's a guard-nosetests plugin to run Python tests via nose.
Guard supports cross-platform notifications (Linux, OSX, Windows), including Growl, as well as many other great features. One of my can't-live-without dev tools.
One very useful tool that can make your life easier is entr. Written in C, and uses kqueue or inotify under the hood.
The following command runs your test suite whenever any *.py file in your project changes.
ls */**.py | entr python -m unittest discover -s test
Works for BSD, Mac OS, and Linux. You can get entr from Homebrew.
Maybe buildbot would be useful http://buildbot.net/trac
For your third question, maybe the trace module is what you need:
>>> def y(a): return a*a
>>> def x(a): return y(a)
>>> import trace
>>> tracer = trace.Trace(countfuncs = 1)
>>> tracer.runfunc(x, 2)
4
>>> res = tracer.results()
>>> res.calledfuncs
{('<stdin>', '<stdin>', 'y'): 1, ('<stdin>', '<stdin>', 'x'): 1}
res.calledfuncs contains the functions that were called. If you specify countcallers = 1 when creating the tracer, you can get caller/callee relationships. See the docs of the trace module for more information.
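As a similarly minimal sketch of the caller/callee variant mentioned above (note that callers, like calledfuncs, is an attribute of the results object rather than a documented API, and the exact entries depend on how the traced function was invoked):
>>> tracer = trace.Trace(countcallers = 1)
>>> tracer.runfunc(x, 3)
9
>>> res = tracer.results()
>>> res.callers  # dict keyed by (caller, callee) pairs, e.g. x -> y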
You can also try to get the calls via static analysis, but this can be dangerous due to the dynamic nature of Python.
Django's development server has a file change monitor that watches for modifications and automatically reloads itself. You could re-use this code to launch unit tests on file modification.
Maybe Nose (http://somethingaboutorange.com/mrl/projects/nose/) has a plugin for this, or you could write one (http://somethingaboutorange.com/mrl/projects/nose/doc/writing_plugins.html).
Found this: http://jeffwinkler.net/2006/04/27/keeping-your-nose-green/
You can use nodemon for the task, watching .py files and executing manage.py test. The command would be: nodemon --ext py --exec "python manage.py test".
nodemon is an npm package, however, so I assume you have Node installed.
Check out pytddmon. Here is a video demonstration of how to use it:
http://pytddmon.org/?page_id=33