How can I be sure of the order in which unittest runs my test methods? Are alphabetical or numeric prefixes the proper way?
class TestFoo(TestCase):
    def test_1(self):
        ...

    def test_2(self):
        ...
or
class TestFoo(TestCase):
    def test_a(self):
        ...

    def test_b(self):
        ...
You can disable the sorting of test methods by setting sortTestMethodsUsing to None:
import unittest
unittest.TestLoader.sortTestMethodsUsing = None
For pure unit tests, you folks are right; but for component tests and integration tests...
I do not agree that you should assume nothing about the state.
What if you are testing the state?
For example, your test validates that a service is auto-started upon installation. If you start the service in your setup and then do the assertion, you are no longer testing the state; you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unit test" frameworks for component testing...so stop and ask yourself, am I doing unit testing or component testing?
There is no reason given for why you can't build on what was done in a previous test, or why you should rebuild it all from scratch for the next one. At least, no reason is usually offered; people just confidently say "you shouldn't". That isn't helpful.
In general I am tired of reading too many answers here that basically say "you shouldn't do that" instead of giving any information on how best to do it if, in the questioner's judgment, there is good reason to do so. If I wanted someone's opinion on whether I should do something, then I would have asked for opinions on whether doing it is a good idea.
That out of the way: if you read loadTestsFromTestCase and what it calls, it ultimately scans for methods matching a name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and builds a test suite by mapping it onto the TestCase class. Giving it a list ordered as you would like instead is one way to do this. I am not sure it is the most efficient or cleanest way, but it does work.
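For illustration, here is a minimal sketch (the class and test names are made up) of handing the runner an explicitly ordered suite instead of relying on the loader's alphabetical sort:
import unittest

class TestFoo(unittest.TestCase):
    def test_first(self):
        ...

    def test_second(self):
        ...

def suite():
    # Explicit ordering: the suite runs the tests exactly in this list order.
    return unittest.TestSuite([TestFoo("test_first"), TestFoo("test_second")])

if __name__ == "__main__":
    unittest.TextTestRunner().run(suite())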
If you use 'nose' and you write your test cases as functions (and not as methods of some TestCase derived class), 'nose' doesn't fiddle with the order, but uses the order of the functions as defined in the file.
In order to have the assert_* methods handy without needing to subclass TestCase I usually use the testing module from NumPy. Example:
from numpy.testing import *
def test_aaa():
    assert_equal(1, 1)

def test_zzz():
    assert_equal(1, 1)

def test_bbb():
    assert_equal(1, 1)
Running that with nosetests -vv gives:
test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and can run independently, your functions and classes are usually not independent.
They rather build on one another, from simpler low-level functions to more complex high-level functions. When you start optimising your low-level functions and mess something up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-), it is a lot easier to diagnose the cause when the tests for the simple functions come first and the tests for the functions that depend on them come later.
If the tests are sorted alphabetically, the real cause usually gets drowned among a hundred failed assertions, which fail not because the functions under test have bugs, but because the low-level function they rely on does.
That's why I want to have my unit tests sorted the way I specified them: not to use state that was built up in early tests in later tests, but as a very helpful tool in diagnosing problems.
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in UnitTest.
That said, one alternative is to use mock objects to mock out and patch the items that should run before the specific code under test. You can also monkey patch your code with a dummy function. For more information, check out Mock, which is part of the standard library now (unittest.mock).
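For instance, here is a minimal, self-contained sketch (the function names are made up) of patching a prerequisite with unittest.mock instead of relying on an earlier test having run:
import unittest
from unittest.mock import patch

def start_service():
    # Imagine this is the expensive prerequisite an earlier test used to provide.
    raise RuntimeError("would hit the real system")

def service_is_running():
    start_service()
    return True

class TestServiceState(unittest.TestCase):
    # Patch the prerequisite instead of depending on a previously run test.
    @patch(f"{__name__}.start_service", return_value=None)
    def test_state_with_patched_prerequisite(self, mock_start):
        self.assertTrue(service_is_running())
        mock_start.assert_called_once()

if __name__ == "__main__":
    unittest.main()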
Here are some YouTube videos if you haven't used Mock before.
Video 1
Video 2
Video 3
More to the point, try using class methods to structure your setup code, and then call all the class methods, in order, from setUpClass:
import unittest
import sqlite3
class MyOrderedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * FROM players")
        draft = [(1, u'Hakeem', u'Olajuwon'), (2, u'Sam', u'Bowie'), (3, u'Michael', u'Jordan')]
        query = cur.fetchall()
        print(query)
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print(third)
        self.assertEqual(third, "Michael Jordan")
Why do you need a specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the database setup code or create testing data ad nauseam.)
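As a rough sketch of that factoring (all names here are hypothetical), the shared setup lives in one helper so every test starts from a known state without depending on another test having run first:
import sqlite3
import unittest

def make_db_with_subscription(email):
    # Hypothetical helper: a fresh in-memory database containing one test subscription.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE subscriptions (email TEXT PRIMARY KEY)")
    conn.execute("INSERT INTO subscriptions VALUES (?)", (email,))
    return conn

class TestUnsubscribe(unittest.TestCase):
    def setUp(self):
        self.conn = make_db_with_subscription("user@example.com")

    def test_unsubscribe_removes_subscription(self):
        self.conn.execute("DELETE FROM subscriptions WHERE email = ?", ("user@example.com",))
        rows = self.conn.execute("SELECT * FROM subscriptions").fetchall()
        self.assertEqual(rows, [])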
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from those tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Don't rely on the order. If they use some common state, like the filesystem or database, then you should create setUp and tearDown methods that get your environment into a testable state, and then clean up after the tests have run.
Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
You should try the proboscis library. It allows you to set the test order as well as set up dependencies between tests. I use it, and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B, you CAN set this behaviour using the library.
From unittest — Unit testing framework
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
Check out this Stack Overflow question for details.
Here is a simpler method that has the following advantages:
No need to create a custom TestCase class.
No need to decorate every test method.
Use the unittest standard load test protocol. See the Python docs here.
The idea is to go through all the test cases of the test suites given to the test loader protocol and create a new suite but with the tests ordered by their line number.
Here is the code:
import unittest
def load_ordered_tests(loader, standard_tests, pattern):
    """
    Test loader that keeps the tests in the order they were declared in the class.
    """
    ordered_cases = []
    for test_suite in standard_tests:
        ordered = []
        for test_case in test_suite:
            test_case_type = type(test_case)
            method_name = test_case._testMethodName
            testMethod = getattr(test_case, method_name)
            line = testMethod.__code__.co_firstlineno
            ordered.append((line, test_case_type, method_name))
        ordered.sort()
        for line, case_type, name in ordered:
            ordered_cases.append(case_type(name))
    return unittest.TestSuite(ordered_cases)
You can put this in a module named order_tests and then in each unittest Python file, declare the test loader like this:
from order_tests import load_ordered_tests
# This orders the tests to be run in the order they were declared.
# It uses the unittest load_tests protocol.
load_tests = load_ordered_tests
Note: the often suggested technique of setting the test sorter to None no longer works because Python now sorts the output of dir() and unittest uses dir() to find tests. So even though you have no sorting method, they still get sorted by Python itself!
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:
def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza": "pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
However, in such cases, you might want to consider structuring your tests differently so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.
def make_myobject(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj
def test_00_createEmptyObject(self):
    obj = self.make_myobject(None, None)

def test_01_createObject(self):
    obj = self.make_myobject("hello", {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_myobject("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
There are scenarios where the order can be important and where setUp and tearDown are too limited. There is only one setUp and one tearDown method, which is logical, but you can only put so much information into them before it becomes unclear what setUp or tearDown is actually doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't log in without an existing account.
More importantly, the order of your tests represents some kind of user interaction, where each test might represent a step in the whole process or flow you're testing.
Dividing your code into those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:
def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
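A slightly fuller sketch of that pattern (the flow and names are hypothetical): the steps are plain helper methods, so only the single test_* method is discovered by the loader and the steps always run in the order written:
import unittest

class TestSignupFlow(unittest.TestCase):
    def setUp(self):
        self.users = {}        # stand-in for the real backend
        self.sessions = set()

    def _test_registration_flow(self):
        self.users["alice"] = "secret"
        self.assertIn("alice", self.users)

    def _test_login_flow(self):
        self.assertEqual(self.users.get("alice"), "secret")
        self.sessions.add("alice")
        self.assertIn("alice", self.sessions)

    def test_registration_login_flow(self):
        self._test_registration_flow()
        self._test_login_flow()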
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE." If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop will write a log entry to the effect that it is sleeping 1 second. Once the "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setup()) and decrements the same variable in teardown().
The effect of this is to allow various data consumers to block the purge functionality so long as they need without fear that the purge could execute in between tests. Effectively the purge operation is pushed to the last step...or at least the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge to test data restoration.
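A rough sketch of the gating idea (the variable handling is simplified and the names are hypothetical; note that os.environ only affects the current process and its children):
import os
import time
import unittest

def block_count():
    return int(os.environ.get("BLOCK_DATA_PURGE", "0"))

class TestNeedsData(unittest.TestCase):
    def setUp(self):
        # Hold off the purge while this test is using the shared data set.
        os.environ["BLOCK_DATA_PURGE"] = str(block_count() + 1)

    def tearDown(self):
        os.environ["BLOCK_DATA_PURGE"] = str(block_count() - 1)

    def test_uses_shared_data(self):
        self.assertGreaterEqual(block_count(), 1)

class TestDataPurge(unittest.TestCase):
    def test_purge(self):
        # Wait until no other test holds the data set, then purge.
        while block_count() > 0:
            print("purge blocked, sleeping 1 second")
            time.sleep(1)
        # ... run and verify the purge functionality here ...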
See the example of WidgetTestCase on Organizing test code. It says that
Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance.
So it might be of no use to specify the order of test cases, if you do not access global variables.
I have implemented a plugin, nosedep, for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments, this is often a bad idea, however there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs. hours).
A minimal example is:
def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass
To ensure that test_b is always run before test_a.
The philosophy behind unit tests is to make them independent of each other. This means the first step of each test should be to rethink how you are testing each piece so that it matches that philosophy. This can involve changing how you approach testing and being creative about narrowing your tests to smaller scopes.
However, if you still find that you need tests to run in a specific order (as that is viable), you could try checking out the answer to Python unittest.TestCase execution order.
It seems they are executed in alphabetical order by test name (using the comparison function between strings).
Since tests in a module are also only executed if they begin with "test", I put in a number to order the tests:
class LoginTest(unittest.TestCase):
    def setUp(self):
        driver.get("http://localhost:2200")

    def tearDown(self):
        # self.driver.close()
        pass

    def test1_check_at_right_page(self):
        ...
        assert "Valor" in driver.page_source

    def test2_login_a_manager(self):
        ...
        submit_button.click()
        assert "Home" in driver.title

    def test3_is_manager(self):
        ...
Note that this relies on string ordering, not numeric ordering: "9" > "10" is True in Python, for instance. Consider using zero-padded decimal strings (which avoid this problem), such as "000", "001", ..., "010", ..., "099", "100", ..., "999".
Contrary to what was said here:
tests have to run in isolation (order must not matter for that)
and
ordering them is important because they describe what the system does and how the developer implements it.
In other words, each test brings you information of the system and the developer logic.
So if this information is not ordered it can make your code difficult to understand.
To randomise the order of test methods you can monkey patch the unittest.TestLoader.sortTestMethodsUsing attribute
if __name__ == '__main__':
    import random
    unittest.TestLoader.sortTestMethodsUsing = lambda self, a, b: random.choice([1, 0, -1])
    unittest.main()
The same approach can be used to enforce whatever order you need.
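As a sketch of the same monkey-patching trick used to enforce a specific order (the method names and ordering list below are made up, and this relies on your Python version's loader still honouring sortTestMethodsUsing):
import unittest

# Desired execution order, first to last (hypothetical test names).
DESIRED_ORDER = ["test_create", "test_update", "test_delete"]

def compare_by_desired_order(self, a, b):
    # Names not in the list sort after the listed ones; ties fall back to string order.
    def rank(name):
        return DESIRED_ORDER.index(name) if name in DESIRED_ORDER else len(DESIRED_ORDER)
    return (rank(a) > rank(b)) - (rank(a) < rank(b)) or (a > b) - (a < b)

unittest.TestLoader.sortTestMethodsUsing = compare_by_desired_order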
I've really tried to start isolating my unit tests so I can pinpoint where errors occur rather than having my entire screen turn red with failures when one thing goes wrong. It's been working in all instances except when something in an initializer fails.
Check out these tests:
@setup_directory(test_path)
def test_filename(self):
    flexmock(lib.utility.time).should_receive('timestamp_with_random').and_return(1234)
    f = SomeFiles(self.test_path)
    assert f.path == os.path.join(self.test_path, '1234.db')

@setup_directory(test_path)
def test_filename_with_suffix(self):
    flexmock(lib.utility.time).should_receive('timestamp_with_random').and_return(1234)
    f = SomeFiles(self.test_path, suffix='.txt')
    assert f.path == os.path.join(self.test_path, '1234.txt')
I'm mocking dependent methods so that the thing I'm testing is completely isolated. What you notice is that the class needs to be instantiated for every single test. If an error is introduced in the initializer, every single test fails.
This is the offending constructor that calls the class's initializer:
SomeFiles(*args)
Is there a way to isolate or mock the initializer or object constructor?
I'm not sure what testing packages you're using, but in general, you can usually just mock the __init__() call on the class before actually attempting to use it. Something like
def my_init_mock_fn(*args, **kwargs):
    print('mock_init')

SomeFiles.__init__ = my_init_mock_fn
SomeFiles()
This probably isn't exactly what you want, as from this point on SomeFiles.__init__ will always be the mock fn, but there are utilities like voidspace mock that provide a patch function that allows you to patch the class just for a specific scope.
from mock import patch

with patch.object(SomeFiles, '__init__', my_init_mock_fn):
    SomeFiles()
    # ..other various tests...

SomeFiles()  # __init__ is reset to original __init__ fn
I'm sure there's probably similar functionality in whatever mocking package you are using.
Just realized you're using flexmock, there's a page for replace_with here.
What's causing the initialising function to fail? Maybe that's a bug that you should be looking into.
Another thing you can do, instead of mocking the object constructor, is simply mock its return values. I.e.: given this input, I expect this output -- so I'm going to use this expected output whether or not it was returned correctly.
You can also stop testing on first failure. (failfast)
You also might want to reconsider how your tests are set up. If you have to recreate two files for every test, ask yourself why. Could your tests be structured so that you set up the two files, then run a series of tests, rinse and repeat? That way only the series of tests assigned to that path fails, helping you isolate why it failed at all.
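A minimal sketch of that restructuring (the file names and layout are hypothetical): the files are created once per test class in setUpClass, so a broken setup fails one class instead of every test:
import os
import tempfile
import unittest

class TestsSharingFiles(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Create the two files once for the whole series of tests.
        cls.test_path = tempfile.mkdtemp()
        for name in ("one.db", "two.db"):
            with open(os.path.join(cls.test_path, name), "w") as fh:
                fh.write("fixture")

    def test_first_file_exists(self):
        self.assertTrue(os.path.exists(os.path.join(self.test_path, "one.db")))

    def test_second_file_exists(self):
        self.assertTrue(os.path.exists(os.path.join(self.test_path, "two.db")))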
I have created a python class Test_getFileSize for use with nose
relevant sections:
def __init__(self, mytestfile="./filetest", testsize=102400):
    '''Constructor'''
    print(" Running __init__", testsize, mytestfile)
    self.testsize = testsize
    self.mytestfile = mytestfile
and the workhorse method:
@with_setup(setUp, tearDown)
def test_getFileSize(self):
    from nose.tools import ok_, eq_, with_setup
    import mp4
    with open(self.mytestfile, "rb") as out:
        filesize = mp4.getFileSize(out)
        eq_(self.testsize, filesize, msg='Passed Test size')
        print("Results ", filesize, self.testsize)
If I run nosetest against the file containing this class, it correctly tests the class using the default values and the correct setUp and tearDown methods. Problem is that when I write a class to do just that, the setUp method never gets run.
What I want to be able to do is test different file sizes ( i.e. pass a filesize value).
If there is a better way to do it, I am all ears. I would prefer not to do it via the command line if possible.
Thanks
Jim
You could write a test function (not part of a class) where the test function itself is a generator, with each yield returning a new function and its arguments, so that each yield generates another test. That would work well if you had 500 different filenames/filesizes as a list you wanted to test against.
See here for a simple example/docs: http://nose.readthedocs.org/en/latest/writing_tests.html#test-generators
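For illustration, a small sketch of such a generator test (the file names, sizes, and size lookup are made up so the example is self-contained):
from nose.tools import eq_

# Hypothetical list of (filename, expected size) cases.
CASES = [("small.mp4", 1024), ("large.mp4", 102400)]

def fake_size_lookup(filename):
    # Stand-in for the real check, e.g. something like mp4.getFileSize(...)
    return dict(CASES)[filename]

def check_file_size(filename, expected):
    eq_(fake_size_lookup(filename), expected)

def test_file_sizes():
    # nose runs one test per yielded (callable, args...) tuple.
    for filename, expected in CASES:
        yield check_file_size, filename, expected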
With a test class, things get a bit trickier - since it doesn't allow you to use this generator method for class methods. You could use a metaclass to return a class with a suitable number of functions to run your test (one per case, for example.) but that might be beyond what you want to do.
That being said, you might find it sufficient to have a single test method that iterates over a list of filenames/sizes and performs the test on each one. The work there is significantly less, but also results in a single "test" output line for the collective set of tests.
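That simpler alternative might look like this sketch (again, the cases and lookup are invented so it stands alone):
from nose.tools import eq_

CASES = [("small.mp4", 1024), ("large.mp4", 102400)]  # hypothetical cases

def fake_size_lookup(filename):
    return dict(CASES)[filename]  # stand-in for the real size check

def test_all_file_sizes():
    # One reported "test", but it still checks every case in the list.
    for filename, expected in CASES:
        eq_(fake_size_lookup(filename), expected)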
You might reference this question for an answer as to how one person did this:
nose, unittest.TestCase and metaclass: auto-generated test_* methods not discovered
I am trying to design and test code similar to the following in a good object-oriented (or Pythonic) way.
Here is a factory class which decides whether a person's name is long or short:
class NameLengthEvaluator(object):
    def __init__(self, cutoff=10):
        self.cutoff = cutoff

    def evaluate(self, name):
        if len(name) > self.cutoff:
            return 'long'
        else:
            return 'short'
Here is a person class with an opinion on the length of their own name:
class Person(object):
    def __init__(self, name=None, long_name_opinion=8):
        self.name = name
        self.long_name_opinion = long_name_opinion

    def name_length_opinion(self):
        return 'My names is ' + \
            NameLengthEvaluator(self.long_name_opinion).evaluate(self.name)
A couple questions:
Does the Person method name_length_opinion() deserve a unit test, and if so what would it look like?
In general, is there a good way to test simple methods of classes with functionality that is entirely external?
It seems like any test for this method would just restate its implementation, and that the test would just exist to confirm that nobody was touching the code.
(disclaimer: code is untested and I am new to python)
Unit Testing
Does the Person method name_length_opinion() deserve a unit test, and if so what would it look like?
Do you want to make sure it does what you think it does and makes sure it doesn't break in the future? If so, write a unit test for it.
and that the test would just exist to confirm that nobody was touching the code
Unit testing is more about making sure a class conforms to the contract that it specifies. You don't have to write a unit test for everything, but if it's a simple method, it should be a simple unit test anyways.
Repetition
It seems like any test for this method would just restate its implementation
You shouldn't be repeating the algorithm, you should be using use cases. For instance, a NameLengthEvaluator with a cutoff of 10 should have these be short names:
George
Mary
and these be long names:
MackTheKnife
JackTheRipper
So you should verify that the method reports the shortness of these names correctly. You should also test that a NameLengthEvaluator with a cutoff of 4 would report Mary as short and the others as long.
Throwaway Code?
If you've ever written a class and then written a main method that just runs the class to make sure it does what it is supposed to (and then you throw that main method away when you move onto another class), you've already written a unit test. But instead of throwing away, save it and convert it to a unit test so that in the future you can make sure you didn't break anything.
External Code
In general, is there a good way to test simple methods of classes with functionality that is entirely external
Well, if it's entirely external then why is it a method on that class? Normally you have at least some logic that can be tested. In this case, you can test that name_length_opinion returns My names is long or My names is short in the correct cases.
It really depends on the lifecycle of that code. It's obvious that, in its current state, the method is obviously correct, and the unit test is more of a specification for how it should behave. If you plan on making changes in the future (reimplementing NameLengthEvaluator somehow differently, for instance), having unit tests is great, because running your tests will catch any regressions. But in this case, it seems unlikely that you'd make any changes, so the tests are probably excessive (though a good sanity check).
Normally you'd use a mock here. You could make a mock NameLengthEvaluator that returned an object that recorded what it was concatenated with, and when name_length_opinion returned, you'd check to make sure that it was used and concatenated with the right thing.
For example, using unittest.mock:
from unittest.mock import MagicMock, patch

@patch('your_module.NameLengthEvaluator', autospec=True)
def test_person_name_length_opinion(NameLengthEvaluator):
    expected_result = object()
    opinion = MagicMock(name='opinion')
    opinion.__radd__.return_value = expected_result

    name_length_evaluator = MagicMock(name='name_length_evaluator')
    name_length_evaluator.evaluate.return_value = opinion
    NameLengthEvaluator.return_value = name_length_evaluator

    name = object()
    length_limit = object()
    person = Person(name, long_name_opinion=length_limit)
    result = person.name_length_opinion()

    NameLengthEvaluator.assert_called_with(length_limit)
    name_length_evaluator.evaluate.assert_called_with(name)
    opinion.__radd__.assert_called_with('My names is ')
    assert result is expected_result
However, since the method is so simple, I'm not sure you care that much.
I have a method that calls two other methods in it.
def main_method(self, query):
    result = self.method_one(query)
    count = self.method_two(result)
    return count

def method_one(self, query):
    # Do some stuff based on results.
    # This method hits the database.
    return result

def method_two(self, result):
    # Do some stuff based on result.
    # This method also hits the database.
    return count
I'm not very experienced at unit testing and have never worked with Mocks and Stubs.
I'm not too sure how to create a unit test for my first method. Since method_one and method_two hit the database many times and are very expensive, I have decided to use mox to create a mock or stub in order to eliminate the need to hit the database.
I would really appreciate it if someone who has experience working with Mocks and Stubs give me some hints on using mocks and stubs for my case.
Before worrying about testing main_method(), first test the smaller methods. Consider method_one(). For the purpose of discussion, let's say it exists in a class like this:
class Foo(object):
def method_one(self, query):
# Big nasty query that hits the database really hard!!
return query.all()
In order to test that method without hitting the database, we need an object that knows how to respond to the all() method. For example:
class MockQuery(object):
def all(self):
return [1,2]
Now we can test it:
f = Foo()
q = MockQuery()
assert f.method_one(q) == [1,2]
That's a basic illustration. The real world is often more complicated. In order to be worth the trouble of writing the test, your mock all() would likely do something more interesting than return a constant. Along similar lines, if method_one() contains a bunch of other logic, our MockQuery might need to be more elaborate -- that is, capable of responding appropriately to more methods. Often while trying to test code you realize that your original design was overburdened: you might need to refactor method_one() into smaller, more tightly defined -- and thus more testable -- parts.
Taking the same logic a step up in the hierarchy, you might create a MockFoo class that would know how to respond in simplified ways to method_one() and method_two().
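Carrying that a step further, here is a rough sketch of testing main_method itself by stubbing out the two expensive methods, using unittest.mock rather than mox (the class name and return values are hypothetical):
import unittest
from unittest.mock import patch

class Reporter(object):
    # Hypothetical stand-in for the real class that owns main_method.
    def main_method(self, query):
        result = self.method_one(query)
        count = self.method_two(result)
        return count

    def method_one(self, query):
        raise NotImplementedError("hits the database in real life")

    def method_two(self, result):
        raise NotImplementedError("hits the database in real life")

class TestMainMethod(unittest.TestCase):
    @patch.object(Reporter, "method_two", return_value=3)
    @patch.object(Reporter, "method_one", return_value=["a", "b", "c"])
    def test_main_method_returns_count(self, mock_one, mock_two):
        self.assertEqual(Reporter().main_method("some query"), 3)
        mock_one.assert_called_once_with("some query")
        mock_two.assert_called_once_with(["a", "b", "c"])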
I need to create a unit-test for some python class. I have a database of inputs and expected results which should be generated by the UUT for those inputs.
Here is the pseudo-code of what I want to do:
for i = 1 to NUM_TEST_CASES:
    load input for test case i
    execute UUT on the input and save the output of the run
    load expected result for test case i
    compare the output of the run with the expected result
Can I achieve this using the unittest package or is there some better testing package for this purpose?
The way you describe testing is an odd match for unit testing in general. Unit testing does not -- typically -- load test data or expected results from external files. Generally, it's simply hard-coded in the unit test.
That's not to say that your plan won't work. It's just to say that it's atypical.
You have two choices.
(1) (This is what we do.) Write a little script that does the "Load input for test case i" and "Load expected result for test case i" steps. Use this to generate the required unittest code. (We use Jinja2 templates to write Python code from source files.)
Then delete the source files. Yes, delete them. They'll only confuse you.
What you have left is proper Unittest files in the "typical" form with static data for the test case and expected results.
(2) Write your setUp method to do the "Load input for test case i" and "Load expected result for test case i" steps. Write your test method to exercise the UUT.
It might look like this.
class OurTest( unittest.TestCase ):
    def setUp( self ):
        self.load_data()
        self.load_results()
        self.uut = ... UUT ...

    def runTest( self ):
        ... exercise UUT with source data ...
        ... check results, using self.assertXXX methods ...
Want to run this many times? One way is to do something like this.
class Test1( OurTest ):
    source_file = 'this'
    result_file = 'that'

class Test2( OurTest ):
    source_file = 'foo'
    result_file = 'bar'
This will allow the unittest main program to find and run your tests.
We do something like this in order to run what are actually integration (regression) tests within the unittest framework (actually an in-house customization thereof which gives us enormous benefits such as running the tests in parallel on a cluster of machines, etc, etc -- the great added value of that customization is why we're so keen to use the unittest framework).
Each test is represented in a file (the parameters to use in that test, followed by the expected results). Our integration_test reads all such files from a directory, parses each of them, and then calls:
def addtestmethod(testcase, uut, testname, parameters, expresults):
    def testmethod(self):
        results = uut(parameters)
        self.assertEqual(expresults, results)
    testmethod.__name__ = testname
    setattr(testcase, testname, testmethod)
We start with an empty test case class:
class IntegrationTest(unittest.TestCase): pass
and then call addtestmethod(IntegrationTest, ... in a loop in which we're reading all the relevant files and parsing them to get testname, parameters, and expresults.
Finally, we call our in-house specialized test runner which does the heavy lifting (distributing the tests over available machines in a cluster, collecting results, etc). We didn't want to reinvent that rich-value-added wheel, so we're making a test case as close to a typical "hand-coded" one as needed to "fool" the test runner into working right for us;-).
Unless you have specific reasons (good test runners or the like) to use unittest's approach for your (integration?) tests, you may find your life is simpler with a different approach. However, this one is quite viable and we're quite happy with its results (which mostly include blazingly-fast runs of large suites of integration/regression tests!-).
To me it seems like pytest has just the thing you need.
You can parametrise tests so that the same test is run as many times as you have inputs, and all it takes is a decorator (no loops etc.).
Here's a plain example:
import pytest

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Here parametrize takes two arguments: the names of the parameters as a string, and the values of those parameters as an iterable.
test_eval will then be called once for each element of the list.
Maybe you could use doctest for this. Knowing your inputs and outputs (and being able to map the case number to a function name) you should be able to produce a text file like this:
>>> from XXX import function_name1
>>> function_name1(input1)
output1
>>> from XXX import function_name2
>>> function_name2(input2)
output2
...
And then just use doctest.testfile('cases.txt'). It could be worth trying.
You might also want to take a look at my answer to this question. Again, I'm trying to do regression testing rather than unit testing per se, but the unittest framework is good for both.
In my case, I had about a dozen input files, covering a fair spread of different use cases, and I had about half a dozen test functions I wanted to call on each.
Instead of writing 72 different tests, most of which were identical apart from the input parameters and results data, I created a dictionary of results (with the key being the input parameters and the value being a dictionary of results for each function under test). I then wrote a single TestCase class to test each of the 6 functions and replicated that over the 12 test files by adding the TestCase to the test suite multiple times.
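A rough sketch of that arrangement (the input files, functions under test, and expected values are all invented for illustration):
import unittest

# Hypothetical map: input file -> expected result for each function under test.
EXPECTED = {
    "case_a.txt": {"parse": 3, "validate": True},
    "case_b.txt": {"parse": 7, "validate": False},
}

def parse(path):        # stand-ins for the real functions under test
    return {"case_a.txt": 3, "case_b.txt": 7}[path]

def validate(path):
    return path == "case_a.txt"

class FileCase(unittest.TestCase):
    input_file = None   # set per instance when the suite is built

    def test_parse(self):
        self.assertEqual(parse(self.input_file), EXPECTED[self.input_file]["parse"])

    def test_validate(self):
        self.assertEqual(validate(self.input_file), EXPECTED[self.input_file]["validate"])

def load_tests(loader, tests, pattern):
    # Add the same TestCase to the suite once per input file.
    suite = unittest.TestSuite()
    for input_file in EXPECTED:
        for name in ("test_parse", "test_validate"):
            case = FileCase(name)
            case.input_file = input_file
            suite.addTest(case)
    return suite

if __name__ == "__main__":
    unittest.main()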