How do I test a standalone script file - python

I've written a module to be installed via pip with a dir structure like:
bin/myapp
src/mymodule
src/mymodule/__init__.py
src/mymodule/config.py
src/mymodule/file1.py
src/mymodule/file2.py
tests/func/
tests/unit/file1/test_func1.py
tests/unit/file1/test_func2.py
tests/unit/file2/test_func1.py
setup.py
setup.cfg
setup.py contains:
scripts=['bin/myapp'],
myapp imports mymodule and is the "wrapper" script that executes the module code as needed. e.g. myapp contains:
import mymodule

def main(config_file):
    mymodule.read_config(config_file)
    ...
    mymodule.do_something_else
    ...

if __name__ == '__main__':
    ...
    main(config_file)
I want to write a test under 'func/' (func/test_myapp.py) that will set up a directory structure and then execute "myapp", which calls my module, for an end-to-end test (I will need to mock some functions, as they call real executables that won't exist on a test machine).
But I can't seem to find a good article that tells me how to import 'bin/myapp' so that it can be tested.
Any help is appreciated.
P.S. Code examples must be compatible with both 2.7 and 3.x, hence both are tagged.

Instead of putting logic in bin/myapp, put one or more functions in mymodule/cli.py and use an entry point: https://packaging.python.org/specifications/entry-points/
You'll minimize the boilerplate you write, and the logic will be importable in your tests and by anyone else who installs the package. In some cases you may also find it more convenient to call the function directly rather than shell out, if other clients need more explicit control.
Your exact question about testing is addressed here as well: http://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point
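A minimal sketch of that layout, assuming the package structure from the question; the cli.py body and the entry-point name are illustrative, not the asker's actual code:
# src/mymodule/cli.py (hypothetical new module)
import sys

import mymodule

def main(argv=None):
    argv = sys.argv[1:] if argv is None else argv
    config_file = argv[0]
    mymodule.read_config(config_file)
    # ... whatever bin/myapp did after reading the config ...
    return 0

# setup.py, replacing scripts=['bin/myapp'] with an entry point
from setuptools import setup, find_packages

setup(
    name='mymodule',
    packages=find_packages('src'),
    package_dir={'': 'src'},
    entry_points={
        'console_scripts': ['myapp = mymodule.cli:main'],
    },
)
A functional test can then import mymodule.cli and call main(['path/to/config']) directly, or run the installed myapp script via subprocess for a full end-to-end check.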

One way to do it might be to run the bin in a subprocess from the test, and then check each expected result after it finishes. So, in test_func1.py, something like:
import os
import subprocess

def test1():
    stdout = subprocess.check_output(['bin/myapp'])
    assert os.listdir('./newfolder') == ['blah1', 'blah2']
    assert stdout == '''some expected output'''
This has the benefit of being closer to an integration test, but also the downside of actually being an integration test, so you may need to clean up after the run.
Or you can:
def test1():
    from myapp import main
    assert main()
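Since bin/myapp has no .py extension, a plain import won't find it. A hedged sketch that works on both 2.7 and 3.x is to load it by path with imp (deprecated on newer 3.x, but it keeps a single code path for both versions); the fixture path below is made up for illustration:
import imp

# Load the script by file path under the module name 'myapp'.
myapp = imp.load_source('myapp', 'bin/myapp')

def test_main():
    # Hypothetical config path; point it at whatever the test sets up.
    assert myapp.main('tests/func/fixtures/config.ini')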

Unit testing __main__.py

I have a Python package (Python 3.6, if it makes a difference) that I've designed to run as 'python -m package arguments' and I'd like to write unit tests for the __main__.py module. I specifically want to verify that it sets the exit code correctly. Is it possible to use runpy.run_module to execute my __main__.py and test the exit code? If so, how do I retrieve the exit code?
To be more clear, my __main__.py module is very simple. It just calls a function that has been extensively unit tested. But when I originally wrote __main__.py, I forgot to pass the result of that function to exit(), so I would like unit tests where the main function is mocked to make sure the exit code is set correctly. My unit test would look something like:
@patch('my_module.__main__.my_main', return_value=2)
def test_rc2(self, _):
    """Test that rc 2 is the exit code."""
    sys.argv = ['arg0', 'arg1', 'arg2', …]
    runpy.run_module('my_module')
    self.assertEqual(mod_rc, 2)
My question is, how would I get what I’ve written here as ‘mod_rc’?
Thanks.
Misko Hevery has said before (I believe it was in Clean Code Talks: Don't Look for Things but I may be wrong) that he doesn't know how to effectively unit test main methods, so his solution is to make them so simple that you can prove logically that they work if you assume the correctness of the (unit-tested) code that they call.
For example, if you have a discrete, tested unit for parsing command line arguments; a library that does the actual work; and a discrete, tested unit for rendering the completed work into output, then a main method that calls all three of those in sequence is assuredly going to work.
With that architecture, you can basically get by with just one big system test that is expected to produce something other than the "default" output and it'll either crash (because you wired it up improperly) or work (because it's wired up properly and all of the individual parts work).
At this point, I'm dropping all pretense of knowing what I'm talking about. There is almost assuredly a better way to do this, but frankly you could just write a shell script:
python -m package args
test $? -eq [expected exit code]
That will exit with an error if and only if your program returns the wrong exit code, which Travis CI or similar will treat as a failed build.
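Staying in Python, a hedged sketch of the same exit-code check using runpy, which is what the question asked about: if __main__.py passes its result to exit(), then runpy.run_module raises SystemExit and its code attribute is the value called mod_rc above. The argv values are placeholders, and the check assumes the real my_main returns 2 for them; patching as in the question is trickier, because run_module re-executes the module in a fresh namespace.
import runpy
import sys
import unittest

class TestExitCode(unittest.TestCase):
    def test_exit_code(self):
        sys.argv = ['my_module', 'arg1', 'arg2']  # placeholder arguments
        with self.assertRaises(SystemExit) as ctx:
            runpy.run_module('my_module', run_name='__main__')
        # exit(n) raises SystemExit; .code carries the status.
        self.assertEqual(ctx.exception.code, 2)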
__main__.py is still subject to normal __main__ global behavior — which is to say, you can implement your __main__.py like so
def main():
    # Your stuff
    pass

if __name__ == "__main__":
    main()
and then you can test your __main__ in whatever testing framework you like by using
from your_package.__main__ import main
As an aside, if you are using argparse, you will probably want:
def main(arg_strings=None):
    # …
    args = parser.parse_args(arg_strings)
    # …

if __name__ == "__main__":
    main()
and then you can override arg strings from a unit test simply with
from your_package.__main__ import main

def test_main():
    assert main(["x", "y", "z"]) == …
or a similar idiom in your testing framework.
With pytest, I was able to do:
import mypkgname.__main__ as rtmain
where mypkgname is what you've named your app as a package/module. Then just running pytest as normal worked. I hope this helps some other poor soul.
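For what a test on top of that import might look like, here is a hedged sketch assuming __main__ exposes a main() that returns the exit status; the argv values and expected code are invented for illustration:
import mypkgname.__main__ as rtmain

def test_exit_code(monkeypatch):
    # Pretend the program was invoked as "python -m mypkgname arg1 arg2".
    monkeypatch.setattr('sys.argv', ['mypkgname', 'arg1', 'arg2'])
    assert rtmain.main() == 2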

PyCharm + cProfile + py.test --> pstat snapshot view + call graph are empty

In PyCharm, I set up py.test as the default test runner.
I have a simple test case:
import unittest
import time
def my_function():
    time.sleep(0.42)

class MyTestCase(unittest.TestCase):
    def test_something(self):
        my_function()
Now I run the test by right-clicking the file and choosing Profile 'py.test in test_profile.py'.
I see the test running successfully in the console (it says collected 1 items). However, the Statistics/Call Graph view showing the generated pstat file is empty and says Nothing to show.
I would expect to see profiling information for test_something and my_function. What am I doing wrong?
Edit 1:
If I change the name of the file to something which does not start with test_, remove the unittest.TestCase, and add a __main__ block calling my_function, I can finally run cProfile without py.test and I see results.
However, I am working on a large project with tons of tests. I would like to profile these tests directly instead of writing extra profiling scripts. Is there a way to call py.test's test-discovery mechanism so I can retrieve all tests of the project recursively? (unittest discovery will not suffice, since we yield a lot of parametrized tests from generator functions, which are not recognized by unittest.) That way I could at least solve the problem with only one additional script.
Here is a work-around. Create an additional Python script with the following contents (adapt the path to the tests root accordingly):
import os
import pytest

if __name__ == '__main__':
    source_dir = os.path.dirname(os.path.abspath(__file__))
    test_dir = os.path.abspath(os.path.join(source_dir, "../"))
    # Pass the test directory and the config file as command-line arguments.
    pytest.main([test_dir, "-c", "setup.cfg"])
The script filename must not start with test_, or PyCharm will force you to run it with py.test. Then right-click the file and run it with Profile.
This also comes in handy for running it with Coverage.
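Outside PyCharm, the same wrapper can be profiled straight from the command line with cProfile and the snapshot inspected with pstats. A minimal sketch, assuming the wrapper above was saved as run_tests_profiled.py (the file names are just examples):
# Shell: python -m cProfile -o tests.pstats run_tests_profiled.py
import pstats

stats = pstats.Stats('tests.pstats')
# Show the 20 entries with the largest cumulative time.
stats.sort_stats('cumulative').print_stats(20)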

Is it possible to run all unit tests?

I have two modules with two different classes and their corresponding test classes.
foo.py
------
class foo(object):
    def fooMethod(self):
        # smthg

bar.py
------
class bar(object):
    def barMethod(self):
        # smthg

fooTest.py
------
class fooTest(unittest.TestCase):
    def fooMethodTest(self):
        # smthg

barTest.py
------
class barTest(unittest.TestCase):
    def barMethodTest(self):
        # smthg
In every file, both test and source modules, I removed the if __name__ == "__main__": block to keep things coherent and follow object-oriented style.
Like a JUnit test suite in Java, I'm looking to create a module that runs all the unit tests. For example:
runAllTest.py
-------------
class runAllTest(unittest.TestCase):
    ?????

if __name__ == "__main__":
    ?????
I searched but didn't find any tutorial or example. Is it possible to do this? If so, how?
Note: I'm using Eclipse with the PyDev distribution on a Windows machine.
When running unit tests based on the built-in Python unittest module, run the following at the root level of your project:
python -m unittest discover <module_name>
For the specific example above, it suffices to run
python -m unittest discover .
https://docs.python.org/2/library/unittest.html
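The programmatic equivalent, a minimal sketch of the runAllTest.py module the question asks for (note the pattern argument, since the file names above end in Test.py rather than starting with test_):
import os
import unittest

if __name__ == '__main__':
    start_dir = os.path.dirname(os.path.abspath(__file__))
    # Collect every *Test.py module next to this script and run it.
    suite = unittest.TestLoader().discover(start_dir, pattern='*Test.py')
    unittest.TextTestRunner(verbosity=2).run(suite)
Save it next to fooTest.py and barTest.py and run it directly.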
You could create a TestSuite and run all your tests in its if __name__ == '__main__' block:
import unittest

from fooTest import fooTest
from barTest import barTest

def create_suite():
    loader = unittest.TestLoader()
    test_suite = unittest.TestSuite()
    test_suite.addTests(loader.loadTestsFromTestCase(fooTest))
    test_suite.addTests(loader.loadTestsFromTestCase(barTest))
    return test_suite

if __name__ == '__main__':
    suite = create_suite()
    runner = unittest.TextTestRunner()
    runner.run(suite)
If you do not want to add the test cases manually, look at this question/answer, which basically creates the test suite dynamically, or use features of the unittest module such as test discovery and its command-line options.
I think what you are looking for is the TestLoader. With this you can load specific tests or modules or load everything under a given directory. Also, this post has some useful examples using a TestSuite instance.
EDIT: The code I usually have in my test.py:
if not popts.tests:
    suite = unittest.TestLoader().discover(os.path.dirname(__file__)+'/tests')
    #print(suite._tests)
    # Print outline
    lg.info(' * Going for Interactive net tests = '+str(not tvars.NOINTERACTIVE))
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
else:
    lg.info(' * Running specific tests')
    suite = unittest.TestSuite()
    # Load standard tests
    for t in popts.tests:
        test = unittest.TestLoader().loadTestsFromName("tests."+t)
        suite.addTest(test)
    # Run
    unittest.TextTestRunner(verbosity=popts.verbosity).run(suite)
It does two things: if the -t (tests) flag is not present, it finds and loads all tests in the directory; otherwise, it loads the requested tests one by one.
I think you could just run the following command in the folder where your test files are located:
python -m unittest
As mentioned in the docs, "when executed without arguments Test Discovery is started".
With PyDev, right-click a folder in Eclipse and choose "Run As -> Python unit-test". This will run all tests in that folder (the names of the test files and methods have to start with "test_").
You are looking for nosetests.
You might need to rename your files; I'm not sure about the pattern nose uses to find test files, but personally I use *_test.py. It is possible to specify a custom pattern for your project's test filenames, but I remember being unable to make it work, so I ended up renaming my tests instead.
You also need to follow PEP 328 conventions to work with nose. I don't use IDEs with Python, but your IDE may already follow it; just read the PEP and check.
With a PEP 328 directory/package structure, you can run individual tests as
nosetests path.to.class_test
Note that instead of the usual directory separators (/ or \), I used dots.
To run all tests, simply invoke nosetests at the root of your project.

Good way to collect programmatically generated test suites in nose or pytest

Say I've got a test suite like this:
class SafeTests(unittest.TestCase):
    # snip 20 test functions

class BombTests(unittest.TestCase):
    # snip 10 different test cases
I am currently doing the following:
suite = unittest.TestSuite()
loader = unittest.TestLoader()
safetests = loader.loadTestsFromTestCase(SafeTests)
suite.addTests(safetests)

if TARGET != 'prod':
    unsafetests = loader.loadTestsFromTestCase(BombTests)
    suite.addTests(unsafetests)

unittest.TextTestRunner().run(suite)
I have one major problem, and one interesting point:
I would like to be using nose or py.test (doesn't really matter which).
I have a large number of different applications that are exposing these test suites via entry points.
I would like to be able to aggregate these custom tests across all installed applications, so I can't just use a clever naming convention. I don't particularly care about these being exposed through entry points, but I do care about being able to run tests across applications in site-packages (without just importing... every module).
I do not care about maintaining the current dependency on unittest.TestCase; trashing that dependency is practically a goal.
EDIT: This is to confirm that @Oleksiy's point about passing args to nose.run does in fact work, with some caveats.
Things that do not work:
passing all the files that one wants to execute (which, weird);
passing all the modules that one wants to execute (this either executes nothing, the wrong thing, or too many things; an interesting case of 0, 1 or many, perhaps?);
passing in the modules before the directories: the directories have to come first, or else you will get duplicate tests.
This fragility is absurd; if you've got ideas for improving it I welcome comments, or see the github repo I set up with my experiments trying to get this to work.
All that aside, the following works, including picking up multiple projects installed into site-packages:
#!python
import importlib, os, sys
import nose

def runtests():
    modnames = []
    dirs = set()
    for modname in sys.argv[1:]:
        modnames.append(modname)
        mod = importlib.import_module(modname)
        fname = mod.__file__
        dirs.add(os.path.dirname(fname))
    modnames = list(dirs) + modnames
    nose.run(argv=modnames)

if __name__ == '__main__':
    runtests()
which, if saved into a runtests.py file, does the right thing when run as:
runtests.py project.tests otherproject.tests
For nose you can keep both sets of tests in place and select which ones to run using the attrib plugin. I would keep both test cases and assign attributes to them:
from nose.plugins.attrib import attr

@attr("safe")
class SafeTests(unittest.TestCase):
    # snip 20 test functions

class BombTests(unittest.TestCase):
    # snip 10 different test cases
For your production code I would just call nose with nosetests -a safe, or set NOSE_ATTR=safe in your production test environment, or call the run method on the nose object to run it natively in Python with the -a command-line option based on your TARGET:
import sys
import nose

if __name__ == '__main__':
    module_name = sys.modules[__name__].__file__
    argv = [sys.argv[0], module_name]
    if TARGET == 'prod':
        # In production, select only the tests tagged with the "safe" attribute.
        argv.extend(['-a', 'safe'])
    result = nose.run(argv=argv)
Finally, if for some reason your tests are not discovered, you can explicitly mark them as tests with the @istest decorator (from nose.tools import istest).
This turned out to be a mess: nose pretty much exclusively uses the TestLoader.load_tests_from_names function (it's the only function tested in unit_tests/test_loader), so since I wanted to actually load things from an arbitrary Python object, I seemed to need to write my own logic to figure out what kind of load function to use. Then, in addition, to get things to work correctly like the nosetests script, I needed to import a large number of things. I'm not at all certain that this is the best way to do things, not even kind of. But this is a stripped-down example (no error checking, less verbosity) that is working for me:
import logging
import sys
import types
import unittest

from nose.config import Config, all_config_files
from nose.core import run
from nose.loader import TestLoader
from nose.suite import ContextSuite
from nose.plugins.manager import PluginManager

from myapp import find_test_objects

log = logging.getLogger(__name__)

def load_tests(config, obj):
    """Load tests from an object.

    Requires an already configured nose.config.Config object.
    Returns a nose.suite.ContextSuite so that nose can actually give
    formatted output.
    """
    loader = TestLoader()
    kinds = [
        (unittest.TestCase, loader.loadTestsFromTestCase),
        (types.ModuleType, loader.loadTestsFromModule),
        (object, loader.loadTestsFromTestClass),
    ]
    tests = None
    for kind, load in kinds:
        if isinstance(obj, kind) or issubclass(obj, kind):
            log.debug("found tests for %s as %s", obj, kind)
            tests = load(obj)
            break
    return ContextSuite(tests=tests, context=obj, config=config)

def main():
    "Actually configure the nose config object and run the tests"
    config = Config(files=all_config_files(), plugins=PluginManager())
    config.configure(argv=sys.argv)
    tests = []
    for group in find_test_objects():
        tests.append(load_tests(config, group))
    run(suite=tests)
If your question is, "How do I get pytest to 'see' a test?", you'll need to prepend 'test_' to each test file and each test case (i.e. function). Then, just pass the directories you want to search on the pytest command line and it will recursively search for files that match 'test_XXX.py', collect the 'test_XXX' functions from them and run them.
As for the docs, you can try starting here.
If you don't like the default pytest test collection method, you can customize it using the directions here.
If you are willing to change your code to generate a py.test "suite" (my definition) instead of a unittest suite (tech term), you may do so easily. Create a file called conftest.py like the following stub
import pytest

def pytest_collect_file(parent, path):
    if path.basename == "foo":
        return MyFile(path, parent)

class MyFile(pytest.File):
    def collect(self):
        myname = "foo"
        yield MyItem(myname, self)
        yield MyItem(myname, self)

class MyItem(pytest.Item):
    SUCCEEDED = False

    def __init__(self, name, parent):
        super(MyItem, self).__init__(name, parent)

    def runtest(self):
        if not MyItem.SUCCEEDED:
            MyItem.SUCCEEDED = True
            print("good job, buddy")
            return
        else:
            print("you sucker, buddy")
            raise Exception()

    def repr_failure(self, excinfo):
        return ""
You will generate/add your code into the MyFile and MyItem classes (as opposed to unittest.TestSuite and unittest.TestCase). I kept the MyFile naming convention because the class is intended to represent something you read from a file, but of course you can decouple it from the file contents (as I've done here). See here for an official example of that. The only limit in the way I've written this is that a file named foo must exist in your tree, but you can decouple that too, e.g. by keying on conftest.py or whatever other file name exists exactly once in your tree; if it matches more than one file, everything will run once per matching file, and if you drop the if path.basename test, it will run for every file in your tree.
You can run this from the command line with
py.test -whatever -options
or programmatically from any code you wish with
import pytest
pytest.main("-whatever -options")
The nice thing with py.test is that you unlock many very powerful plugins, such as the HTML report plugin.

python nosetests equivalent of unittest Testsuite in the test file

In nosetests, I know that you can specify which tests you want to run via a nosetests config file as such:
[nosetests]
tests=testIWT_AVW.py:testIWT_AVW.tst_bynd1,testIWT_AVW.py:testIWT_AVW.tst_bynd3
However, the above just looks messy and becomes harder to maintain when a lot of tests are added, especially without being able to use line breaks. I found it a lot more convenient to specify which tests I want to run using unittest's TestSuite feature, e.g.:
def custom_suite():
    suite = unittest.TestSuite()
    suite.addTest(testIWT_AVW('tst_bynd1'))
    suite.addTest(testIWT_AVW('tst_bynd3'))
    return suite

if __name__=="__main__":
    runner = unittest.TextTestRunner()
    runner.run(custom_suite())
Question: How do I specify which tests should be run by nosetests within my .py file? Thanks.
P.S. If there is a way to specify tests via a nosetest config file that doesn't force all tests to be written on one line I would be open to it as well, as a second alternative
I'm not entirely sure whether you want to run the tests programmatically or from the command line. Either way this should cover both:
import itertools
from nose.loader import TestLoader
from nose import run
from nose.suite import LazySuite

paths = ("/path/to/my/project/module_a",
         "/path/to/my/project/module_b",
         "/path/to/my/project/module_c")

def run_my_tests():
    all_tests = ()
    for path in paths:
        all_tests = itertools.chain(all_tests, TestLoader().loadTestsFromDir(path))
    suite = LazySuite(all_tests)
    run(suite=suite)

if __name__ == '__main__':
    run_my_tests()
Note that the nose.loader.TestLoader object has a number of different methods available for loading tests.
You can call the run_my_tests function from other code, or run this file directly with a Python interpreter rather than through nose. If you have other nose configuration, you may need to pass that in programmatically as well.
If I'm understanding your question correctly, you have several options here:
you can mark your tests with the special nose decorators istest and nottest (see docs and the sketch below);
you can mark tests with tags;
you can join test cases into test suites. I haven't used this myself, but it seems you have to override nose's default test discovery to respect your test suites (see docs).
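A hedged sketch of the decorator route, with made-up test names, inside a module nose already scans: nottest hides a test from the collector, while istest exposes a function whose name would otherwise not match nose's pattern.
from nose.tools import istest, nottest

@istest
def bynd1_scenario():
    # Collected even though the name does not contain "test".
    assert True

@nottest
def test_bynd3_disabled():
    # Skipped even though the name matches the default pattern.
    assert False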
Hope that helps.
