@unittest.skip(reason) can't work on main function - python

I'd like to prevent/skip some test cases while running others in Python. I couldn't get @unittest.skip(reason) to work in my case; it always generates a Script Error in Python unittest.
My code:

import unittest

@unittest.skip("something")
def main():
    try:
        something = []
        for _ in range(4):
            test.log("something happened")
The result is:

Error Script Error
Detail: SkipTest: something

Do you have any idea about the issue?

unittest is a module that acts as a testing framework. You should use it according to the framework's practices (example taken from tutorialspoint.com):
import unittest

def add(x, y):
    return x + y

class SimpleTest(unittest.TestCase):
    @unittest.skip("demonstrating skipping")
    def testadd1(self):
        self.assertEqual(add(4, 5), 9)

if __name__ == '__main__':
    unittest.main()
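Running this module, the skipped test is reported with an 's' marker rather than an error (output approximate):

$ python simple_test.py
s
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK (skipped=1)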

Squish test scripts written in Python are unrelated to the unittest module. I cannot tell how the two could be used together.
Squish offers the test.skip() function for skipping test cases:
test.skip(message)
test.skip(message, detail)

This function skips further execution of the current script test case, or the current BDD scenario (or BDD scenario outline iteration), by throwing an exception and adding a SKIPPED entry to Squish's test log with the given message string (and the given detail, if provided). If the test case logged verification failures before the skip, it is still considered failed.
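Inside a Squish Python test script, a skip might look like this minimal sketch (the data_available flag is a hypothetical placeholder for a real precondition check):

def main():
    data_available = False  # hypothetical precondition; replace with a real check
    if not data_available:
        test.skip("something")  # logs a SKIPPED entry and stops this test case
    test.log("something happened")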

Related

Run pytest test in main while using fixture

I am currently using pytest to implement tests for hardware that performs communication, among other things. I am also using the record_property fixture to record some values. The test is defined like this:
def test_comm_ytcb_to_ytpb(record_property):
To debug this test I would like to run it as follows from within PyCharm:

if __name__ == '__main__':
    test_comm_ytcb_to_ytpb()
Yet, Python will yield the following exception in the console:
TypeError: test_comm_ytcb_to_ytpb() missing 1 required positional argument: 'record_property'
Of course, the error is justified, but what do I need so I can continue debugging just like I did before introducing the record_property fixture? I don't mind if nothing is logged or written; I just want to keep debugging the way I did.
EDIT:
I just found out that this seems to work:
if __name__ == '__main__':
    def record_property(a, b):
        return
    test_comm_ytcb_to_ytpb(record_property)
Is that ... acceptable?
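One alternative worth noting (an addition here, not from the original post) is to invoke pytest programmatically, which keeps fixture injection such as record_property working:

import pytest

if __name__ == '__main__':
    # Run only this file under pytest; -s keeps stdout visible for debugging.
    pytest.main(['-s', __file__])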

Count subtests in Python unittests separately

Since version 3.4, Python supports a simple subtest syntax when writing unittests. A simple example could look like this:
import unittest

class NumbersTest(unittest.TestCase):
    def test_successful(self):
        """A test with subtests that will all succeed."""
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i, i)

if __name__ == '__main__':
    unittest.main()
When running the tests, the output will be:

$ python3 test_foo.py --verbose
test_successful (__main__.NumbersTest)
A test with subtests that will all succeed. ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
However, in my real-world use cases, the subtests depend on a more complex iterable and check something which is very different for each subtest. Consequently, I would rather have each subtest counted and listed as a separate test case in the output (Ran 6 tests in ... in this example) to get the full picture.
Is this somehow possible with the plain unittest module in Python? The nose test generator feature would output each test separately but I would like to stay compatible with the standard library if possible.
You could subclass unittest.TestResult:
class NumbersTestResult(unittest.TestResult):
    def addSubTest(self, test, subtest, outcome):
        # handle failures by calling the base class
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        # add to the total number of tests run
        self.testsRun += 1
Then in NumbersTest override the run function:
def run(self, test_result=None):
    return super(NumbersTest, self).run(NumbersTestResult())
Sorry I cannot test this in a fully working environment right now, but this should do the trick.
Using Python 3.5.2, themiurge's answer didn't work out of the box for me, but a little tweaking got it to do what I wanted.
I had to specifically get the test runner to use this new class as follows:
if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=NumbersTestResult))
However, this didn't print the details of test failures to the console as in the default case. To restore this behaviour I had to change the class NumbersTestResult inherits from to unittest.TextTestResult.
class NumbersTestResult(unittest.TextTestResult):
    def addSubTest(self, test, subtest, outcome):
        # handle failures by calling the base class
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        # add to the total number of tests run
        self.testsRun += 1
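Putting both tweaks together, a minimal self-contained sketch (based on the answers above, not tested against every Python version) looks like this. Note that startTest still counts the parent test method once, so the reported total may be one higher than the number of subtests:

import unittest

class NumbersTestResult(unittest.TextTestResult):
    def addSubTest(self, test, subtest, outcome):
        # Let the base class record any subtest failure, then count the
        # subtest as an additional test run.
        super(NumbersTestResult, self).addSubTest(test, subtest, outcome)
        self.testsRun += 1

class NumbersTest(unittest.TestCase):
    def test_successful(self):
        for i in range(6):
            with self.subTest(i=i):
                self.assertEqual(i, i)

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=NumbersTestResult))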

What is the correct way to report an error in a Python unittest in the setUp method?

I've read some conflicting advice on the use of assert in the setUp method of a Python unit test. I can't see the harm in failing a test if a precondition that test relies on fails.
For example:
import unittest

class MyProcessor():
    """
    This is the class under test
    """
    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some', 'processed', 'data', 'from', 'content']  # Imagine this could actually pass

class Test_test2(unittest.TestCase):
    def LoadContentFromTestFile(self):
        return None  # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(len(results), 0, "No results returned")

if __name__ == '__main__':
    unittest.main()
This seems like a reasonable thing to do to me, i.e. making sure the test is able to run. When this fails because of the setUp condition, we get:
F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Projects\Experiments\test2.py", line 21, in setUp
self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
The purpose of setUp is to reduce the boilerplate code that would otherwise be repeated between the tests in the test class during the Arrange phase.
In the Arrange phase you set up everything needed to run the tested code. This includes any initialization of dependencies, mocks, and data needed for the test to run.
Based on the above, you should not assert anything in your setUp method.
So, as mentioned earlier: if you can't create the test precondition, then your test is broken. To avoid situations like this, Roy Osherove wrote a great book called The Art Of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine and I worked closely with them for more than 2 years, so I am a little bit biased...)
Basically, there are only a few reasons to interact with external resources during the Arrange phase (or with anything else that may raise an exception), and most of them (if not all) come up in integration tests.
Back to your example: there is a pattern for structuring tests where all (or most) of them need to load an external resource. A side note: before you decide to apply this pattern, make sure that you can't hold this content as a static resource in your test class; if other test classes need to use this resource, extract it into a module.
The following pattern decreases the possibility of failure, since you make fewer calls to the external resource:
class TestClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Since external resources such as other servers can provide bad
        # content, you can verify here that the content is valid and
        # prevent the tests from running; however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()
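copyContentForTest is not defined in the answer; a minimal sketch, assuming the content is a plain data structure, might be:

import copy

def copyContentForTest(self):
    # Deep-copy the shared class-level content so each test can mutate
    # its own copy without affecting the other tests.
    return copy.deepcopy(self.externalResourceContent)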
Pros:
fewer chances of failure
prevents inconsistent behavior (1. someone or something has edited the external resource; 2. you failed to load the external resource in some of your tests)
faster execution
Cons:
the code is more complex
setUp is not for asserting preconditions but creating them. If your test is unable to create the necessary fixture, it is broken, not failing.
From the Python Standard Library Documentation:
"If the setUp() method raises an exception while the test is running,
the framework will consider the test to have suffered an error, and
the runTest() method will not be executed. If setUp() succeeded, the
tearDown() method will be run whether runTest() succeeded or not. Such
a working environment for the testing code is called a fixture."
An assertion exception in the setUp() method would be considered an error by the unittest framework, and the test will not be executed.
There isn't a right or wrong answer here; it depends on what you are testing and how expensive setting up your tests is. Some tests are too dangerous to allow attempted runs if the data isn't as expected; some need to work with that data.
You can use assertions in setUp if you need to check for particular conditions between tests; this can help reduce repeated code in your tests.
However, it also makes moving test methods between classes or files a bit trickier, as they will rely on having an equivalent setUp. It can also push the limits of complexity for less code-savvy testers.
It is a bit cleaner to have a test that checks these startup conditions individually and to run it first; they might not be needed between each test. If you define it as test_01_check_preconditions, it will run before any of the other test methods, even if the rest are in no particular order.
You can also then use unittest2.skip decorators for certain conditions.
A better approach is to use addCleanup to ensure that state is reset. The advantage here is that the cleanup still runs even if the test fails, and you can make it more aware of the specific situation, since you define it in the context of your test method (see the sketch below).
There is also nothing to stop you defining methods to do common checks in the unittest class and calling them in setUp or in test methods; this can help keep complexity enclosed in defined and managed areas.
Also, don't be tempted to subclass unittest2 beyond a simple test definition; I've seen people try to do that to make tests simpler and actually introduce totally unexpected behaviour.
I guess the real take-home is: if you do it, know why you want to use it and document your reasons, and it's probably OK. If you are unsure, go for the simplest, easiest-to-understand option, because tests are useless if they are not easy to understand.
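For illustration, a minimal sketch of the addCleanup approach mentioned above (create_records and delete_records are hypothetical helpers):

import unittest

class DatabaseTest(unittest.TestCase):
    def setUp(self):
        self.records = create_records()  # hypothetical helper
        # Cleanups registered here run even if the test (or a later part
        # of setUp) fails, unlike tearDown after a failing setUp.
        self.addCleanup(delete_records, self.records)  # hypothetical helper

    def test_records_exist(self):
        self.assertTrue(self.records)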
There is one reason why you want to avoid assertions in a setUp(): if setUp fails, your tearDown will not be executed.
If you set up a set of database records, for instance, and your tearDown deletes these records, then these records will not be deleted.
With this snippet:

import unittest

class Test_test2(unittest.TestCase):
    def setUp(self):
        print('setup')
        assert False

    def test_ProcessData(self):
        print('testing')

    def tearDown(self):
        print('teardown')

if __name__ == '__main__':
    unittest.main()
only the setUp() runs:
$ python t.py
setup
E
======================================================================
ERROR: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
File "t.py", line 7, in setUp
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)

Python unittest report passed test

Hello, I have a test module like the following in "test.py":
import unittest

class TestBasic(unittest.TestCase):
    def setUp(self):
        # set up in here
        pass

class TestA(TestBasic):
    def test_one(self):
        self.assertEqual(1, 1)

    def test_two(self):
        self.assertEqual(2, 1)

if __name__ == "__main__":
    unittest.main()
And this works pretty well, but I need a way to print which tests passed; for example, I could print the output to the console:
test_one: PASSED
test_two: FAILED
Now the twist: I could add a print statement right after the self.assertEqual() call for a passed test and just print it, but I need to run the tests from a different module, let's say "test_reporter.py", where I have something like this:
import unittest
import test

suite = unittest.TestLoader().loadTestsFromModule(test)
results = unittest.TextTestRunner(verbosity=0).run(suite)
At this point, with results, is when I build the report.
Any suggestion is welcome.
Thanks!
Like Corey's comment mentioned, if you set verbosity=2 unittest will print the result of each test run.
results = unittest.TextTestRunner(verbosity=2).run(suite)
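If you need the exact "name: PASSED" format from the question, one option (an assumption on my part, not from the original answers) is a small result subclass; addSuccess and addFailure are standard unittest hooks:

import unittest
import test

class ReportResult(unittest.TextTestResult):
    def addSuccess(self, test_case):
        super().addSuccess(test_case)
        print('%s: PASSED' % test_case.id().split('.')[-1])

    def addFailure(self, test_case, err):
        super().addFailure(test_case, err)
        print('%s: FAILED' % test_case.id().split('.')[-1])

suite = unittest.TestLoader().loadTestsFromModule(test)
results = unittest.TextTestRunner(verbosity=0, resultclass=ReportResult).run(suite)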
If you want a little more flexibility - and you might, since you are creating suites and using test runners - I recommend that you take a look at Twisted Trial. It extends Python's unittest module and provides a few more assertions and reporting features.
Writing your tests will be exactly the same (besides subclassing twisted.trial.unittest.TestCase vs Python's unittest), so your workflow won't change. You can still use your TestLoader, but you'll have the option of many more TestReporters: http://twistedmatrix.com/documents/11.1.0/api/twisted.trial.reporter.html
For example, the default TestReporter is TreeReporter, which prints each test's result in a tree grouped by module and class.

how to generate unit test code for methods

I want to write unit test code to test my application code. I have different methods and now want to test these methods one by one in a Python script, but I do not know how to write it. Can anyone give me a small example of unit testing in Python?
I am thankful.
Read the unit testing framework section of the Python Library Reference.
A basic example from the documentation:
import random
import unittest

class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.seq = list(range(10))

    def testshuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, list(range(10)))

    def testchoice(self):
        element = random.choice(self.seq)
        self.assertIn(element, self.seq)

    def testsample(self):
        self.assertRaises(ValueError, random.sample, self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertIn(element, self.seq)

if __name__ == '__main__':
    unittest.main()
It's probably best to start off with the given unittest example. Some standard best practices:
put all your tests in a tests folder at the root of your project.
write one test module for each python module you're testing.
test modules should start with the word test.
test methods should start with the word test.
When you've become comfortable with unittest (and it shouldn't take long), there are some nice extensions to it that will make life easier as your tests grow in number and scope:
nose -- easily find and run all your tests, and more.
testoob -- colorized output (and more, but that's why I use it).
pythoscope -- haven't tried it, but this will automatically generate (failing) test stubs for your application. Should save a lot of time writing boilerplate code.
Here's an example, and you might want to read a little more on Python's unit testing.
