How to skip some test cases in test discovery? - python

In Python 2.7, I use the unittest module to write tests, and some of them are skipped with @unittest.skip.
My code looks like:
import unittest

class MyTest(unittest.TestCase):
    def test_1(self):
        ...

    @unittest.skip
    def test_2(self):
        ...
I have lots of such test files in a folder, and I use test discovery to run all these test files:
/%python_path/python -m unittest discover -s /%my_ut_folder% -p "*_unit_test.py"
This way, all *_unit_test.py files in the folder will be run. In the above code, both test_1 and test_2 are run. What I want is for all test cases decorated with @unittest.skip, e.g. test_2 in the code above, to be skipped. How do I achieve this?
Any help or suggestion will be greatly appreciated!

Try adding a string argument to the @unittest.skip decorator, as in the following:
import unittest

class TestThings(unittest.TestCase):
    def test_1(self):
        self.assertEqual(1, 1)

    @unittest.skip('skipping...')
    def test_2(self):
        self.assertEqual(2, 4)
Running without the string argument in Python 2.7 gives me the following:
.E
======================================================================
ERROR: test_2 (test_test.TestThings)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib64/python2.7/functools.py", line 33, in update_wrapper
setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: 'TestThings' object has no attribute '__name__'
----------------------------------------------------------------------
Ran 2 tests in 0.001s
whereas running with the string argument in Python 2.7 gives me:
.s
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK (skipped=1)
See https://docs.python.org/3/library/unittest.html or https://www.tutorialspoint.com/unittest_framework/unittest_framework_skip_test.htm for more details
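For context, unittest.skip is a decorator factory: it expects a reason string and returns the actual decorator, which is why the bare form fails under Python 2.7. If you only want to skip under certain conditions, unittest.skipIf and unittest.skipUnless work the same way and are honoured by test discovery as well. A minimal sketch (the conditions below are only illustrative):

import sys
import unittest

class MyTest(unittest.TestCase):
    @unittest.skip("not ready yet")  # unconditional skip, reason required
    def test_2(self):
        pass

    @unittest.skipIf(sys.platform == "win32", "not supported on Windows")
    def test_3(self):
        pass

    @unittest.skipUnless(sys.version_info >= (3, 0), "needs Python 3")
    def test_4(self):
        pass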

Related

Creating a Unit testcase Class within a function not running testcases

I need to create a unit test class within a function so I can call the function when some event is triggered. I am using the method below, but the test cases are not executing:
gp = r"somefile"
def MyFunc():
if os.path.exists(gp):
print("yes")
class First__Test_Cases(unittest.TestCase):
def test_001(self):
print("1")
def test__002(self):
print("2")
if __name__ == '__main__':
unittest.main()
First__Test_Cases()
else:
print("fail")
MyFunc()
Output: Ran 0 tests in 0.000s
Remove MyFunc() and the global parts; the file should only contain the class and the main block:
mytestfile.py
import unittest

class First_Test_Cases(unittest.TestCase):
    def test_001(self):
        pass

    def test__002(self):
        pass

if __name__ == '__main__':
    unittest.main()
Then run
python mytestfile.py
And all tests in the class will be executed:
...
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
You can read more and see more examples in the documentation
If you need to have a function call the test, you should do that in a separate file. Check this post: Run unittests from a different file
From what I understand of your updated code, this is your use case:
Tests should be run only if a certain file exists. Otherwise, they should be skipped.
For this use case, I would suggest the following solution:
import os
import unittest

gp = "some_file.txt"
msg = "file {} does not exist".format(gp)

class First_Test_Cases(unittest.TestCase):
    @unittest.skipUnless(os.path.exists(gp), msg)
    def test_001(self):
        pass

    @unittest.skipUnless(os.path.exists(gp), msg)
    def test_002(self):
        pass

if __name__ == '__main__':
    unittest.main()
The output would be the following if the file does not exist:
ss
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK (skipped=2)
and this one, if it exists:
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
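If every test in the class depends on the same file, the decorator can also be applied once at class level instead of on each method; a small sketch under the same assumptions as above:

import os
import unittest

gp = "some_file.txt"

# Skips every test method in the class when the file is missing.
@unittest.skipUnless(os.path.exists(gp), "file {} does not exist".format(gp))
class First_Test_Cases(unittest.TestCase):
    def test_001(self):
        pass

    def test_002(self):
        pass

if __name__ == '__main__':
    unittest.main()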
If instead you want your tests to fail, change the code like this:
import os
import unittest

gp = "some_file.txt"
msg = "file {} does not exist".format(gp)

class First_Test_Cases(unittest.TestCase):
    def test_001(self):
        self.assertTrue(os.path.exists(gp), msg)  # add this line
        # your code

    def test_002(self):
        self.assertTrue(os.path.exists(gp), msg)  # add this line
        # your code
Then, the output would be the following:
FF
======================================================================
FAIL: test_001 (__main__.First_Test_Cases)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test.py", line 11, in test_001
self.assertTrue(os.path.exists(gp), msg)
AssertionError: file some_file.txt does not exist
======================================================================
FAIL: test_002 (__main__.First_Test_Cases)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test.py", line 15, in test_002
self.assertTrue(os.path.exists(gp), msg)
AssertionError: file some_file.txt does not exist
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (failures=2)

Python unittest expected failures ignored in Python3

I would like to upgrade my python test harness - which is based on Python's unittest module - from Python2 to Python3. However, the unittest.expectedFailure decorator doesn't seem to have the same effect anymore. In particular, the following code has different behavior depending on the Python version even though the specifications are virtually identical:
#!/usr/bin/env python2
#!/usr/bin/env python3
# Switch between the two lines above to get the different outcome
import unittest
class ComparisonTests(unittest.TestCase):
    def runTest(self):
        """ This method is needed even if empty """

    def add_test(self, the_suite):
        def testMain():
            self.testFunc()
        testMain = unittest.expectedFailure(testMain)
        the_case = unittest.FunctionTestCase(testMain)
        the_suite.addTest(the_case)

    def testFunc(self):
        self.assertTrue(False)

if __name__ == '__main__':
    SUITE = unittest.TestSuite()
    ComparisonTests().add_test(SUITE)
    the_runner = unittest.TextTestRunner(verbosity=2)
    the_runner.run(SUITE)
If I keep the first line (#!/usr/bin/env python2) and run on macOS 10.14.1 with Python 2.7.15, the output is the following:
unittest.case.FunctionTestCase (testMain) ... expected failure
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (expected failures=1)
This is the behavior I expect. However, if I switch to the second line (#!/usr/bin/env python3), which will use Python 3.7.3, I get the following:
unittest.case.FunctionTestCase (testMain) ... FAIL
======================================================================
FAIL: unittest.case.FunctionTestCase (testMain)
----------------------------------------------------------------------
Traceback (most recent call last):
File "./unittest_test_2.py", line 12, in testMain
self.testFunc()
File "./unittest_test_2.py", line 18, in testFunc
self.assertTrue(False)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
It looks like the unittest.expectedFailure decorator was ignored. Looking at the source code I can see a clear difference:
# Python 3.7 source:
def expectedFailure(test_item):
    test_item.__unittest_expecting_failure__ = True
    return test_item

# Python 2.7 source:
def expectedFailure(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception:
            raise _ExpectedFailure(sys.exc_info())
        raise _UnexpectedSuccess
    return wrapper
How can I define expected failures in the Python 3 version of unittest?
The Python 3 version of the unittest.expectedFailure decorator is expected to operate on a unittest test case, not on a method as it did in Python 2. So in order for the above test harness to work with Python 3, one needs to apply the expectedFailure decorator to the_case as follows:
the_case = unittest.FunctionTestCase(testMain)
the_case = unittest.expectedFailure(the_case)
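Putting it together, a version of the harness from the question that works under Python 3 could look like this (a sketch based on the snippet above, not an exhaustive port):

import unittest

class ComparisonTests(unittest.TestCase):
    def runTest(self):
        """ This method is needed even if empty """

    def add_test(self, the_suite):
        def testMain():
            self.testFunc()
        # In Python 3, mark the test case object as an expected failure,
        # instead of wrapping the bare function as in Python 2.
        the_case = unittest.FunctionTestCase(testMain)
        the_case = unittest.expectedFailure(the_case)
        the_suite.addTest(the_case)

    def testFunc(self):
        self.assertTrue(False)

if __name__ == '__main__':
    SUITE = unittest.TestSuite()
    ComparisonTests().add_test(SUITE)
    unittest.TextTestRunner(verbosity=2).run(SUITE)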

Unittest example not working

I'm just learning about Python, and unit tests in particular.
I'm trying to follow the following simple example where the function being tested is:
def get_formatted_name(first, last):
    """Generate a neatly formatted name"""
    full_name = first + ' ' + last
    return full_name.title()
and the test code is:
import unittest
from name_function import get_formatted_name

class NamesTestCase(unittest.TestCase):
    """Tests for 'name_function.py'"""
    def test_first_last_name(self):
        """Do names like 'Janis Joplin' work?"""
        formatted_name = get_formatted_name('janis', 'joplin')
        self.assertEqual(formatted_name, 'Janis Joplin')

unittest.main()
According to the example, this should run fine and report that the test ran successfully.
However, I get the following errors:
EE
======================================================================
ERROR: test_name_function (unittest.loader._FailedTest)
----------------------------------------------------------------------
AttributeError: module '__main__' has no attribute 'test_name_function'
======================================================================
ERROR: true (unittest.loader._FailedTest)
----------------------------------------------------------------------
AttributeError: module '__main__' has no attribute 'true'
----------------------------------------------------------------------
Ran 2 tests in 0.000s
FAILED (errors=2)
Process finished with exit code 1
Unfortunately I have no idea what is going wrong!
As per the documentation, you need to add the following code so that the tests run only when the file is executed as the main module. You can see the example here.
if __name__ == '__main__':
    unittest.main()
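For reference, the complete test module with that guard in place would look something like this (assuming the function lives in name_function.py as in the question):

import unittest
from name_function import get_formatted_name

class NamesTestCase(unittest.TestCase):
    """Tests for 'name_function.py'"""

    def test_first_last_name(self):
        """Do names like 'Janis Joplin' work?"""
        formatted_name = get_formatted_name('janis', 'joplin')
        self.assertEqual(formatted_name, 'Janis Joplin')

# Only run the tests when this file is executed directly, not when it is
# imported or collected by a test runner.
if __name__ == '__main__':
    unittest.main()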

Filter tests after discover

I'm currently running my tests like this:
tests = unittest.TestLoader().discover('tests')
unittest.TextTestRunner().run(tests)
Now I want to run a specific test knowing its name (like test_valid_user) but not knowing its class. If there is more than one test with that name, then I would like to run all of them. Is there any way to filter tests after discovery?
Or maybe there are other solutions to this problem (please note that it shouldn't be done from the command line)?
You can use the unittest.loader.TestLoader.testMethodPrefix instance variable to change the filter for test methods to a prefix other than "test".
Say you have a tests directory with this kind of unit test:
import unittest

class MyTest(unittest.TestCase):
    def test_suite_1(self):
        self.assertFalse("test_suite_1")

    def test_suite_2(self):
        self.assertFalse("test_suite_2")

    def test_other(self):
        self.assertFalse("test_other")
You can write your own discover function to discover only test functions starting with "test_suite_", for instance:
import unittest

def run_suite():
    loader = unittest.TestLoader()
    loader.testMethodPrefix = "test_suite_"
    suite = loader.discover("tests")
    result = unittest.TestResult()
    suite.run(result)
    for test, info in result.failures:
        print(info)

if __name__ == '__main__':
    run_suite()
Remark: the "tests" argument to the discover method is a directory path, so you may need to give a full path.
As a result, you'll get:
Traceback (most recent call last):
File "/path/to/tests/test_my_module.py", line 8, in test_suite_1
self.assertFalse("test_suite_1")
AssertionError: 'test_suite_1' is not false
Traceback (most recent call last):
File "/path/to/tests/test_my_module.py", line 11, in test_suite_2
self.assertFalse("test_suite_2")
AssertionError: 'test_suite_2' is not false
Another, simpler way would be to use py.test with the -k option, which does a test-name keyword scan. It will run any test whose name matches the keyword expression.
Although that uses the command line, which you didn't want, note that you can invoke it from your code using subprocess.call and pass any arguments you want dynamically.
E.g.: Assuming you have the following tests:
def test_user_gets_saved(self): pass
def test_user_gets_deleted(self): pass
def test_user_can_cancel(self): pass
You can call py.test from cli:
$ py.test -k "test_user"
Or from code:
return_code = subprocess.call('py.test -k "test_user"', shell=True)
There are two ways to run a single test method:
Command line:
$ python -m unittest test_module.TestClass.test_method
Using Python script:
import unittest

class TestMyCode(unittest.TestCase):
    def setUp(self):
        pass

    def test_1(self):
        self.assertTrue(True)

    def test_2(self):
        self.assertTrue(True)

if __name__ == '__main__':
    testSuite = unittest.TestSuite()
    testSuite.addTest(TestMyCode('test_1'))
    runner = unittest.TextTestRunner()
    runner.run(testSuite)
Output:
------------------------------------------------------------
Ran 1 test in 0.000s
OK
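To answer the literal question of filtering after discovery: a discovered suite can be walked recursively and matched against a method name, regardless of which class the test belongs to. A rough sketch (iter_tests and filter_by_name are my own helper names, and the filter relies on the private _testMethodName attribute):

import unittest

def iter_tests(suite):
    """Recursively yield the individual test cases inside a suite."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in iter_tests(item):
                yield test
        else:
            yield item

def filter_by_name(suite, name):
    """Build a new suite containing only tests whose method name matches."""
    return unittest.TestSuite(
        test for test in iter_tests(suite)
        if getattr(test, '_testMethodName', None) == name
    )

if __name__ == '__main__':
    discovered = unittest.TestLoader().discover('tests')
    selected = filter_by_name(discovered, 'test_valid_user')
    unittest.TextTestRunner().run(selected)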

How to skip nosetests of class decorated with nose.plugins.attrib.attr in shell

A class decorator for skipping nose tests can be written like below:
from nose.plugins.attrib import attr

@attr(speed='slow')
class MyTestCase:
    def test_long_integration(self):
        pass

    def test_end_to_end_something(self):
        pass
As per the documentation, "In Python 2.6 and higher, @attr can be used on a class to set attributes on all its test methods at once".
I couldn't find a way to test the code. Running
nosetests -a speed=slow
didn't help. Any help will be appreciated. Thanks in advance :)
You are missing the unittest.TestCase parent class for your test, i.e.:
from unittest import TestCase
from nose.plugins.attrib import attr

@attr(speed='slow')
class MyTestCase(TestCase):
    def test_long_integration(self):
        pass

    def test_end_to_end_something(self):
        pass

class MyOtherTestCase(TestCase):
    def test_super_long_integration(self):
        pass
Your command should select tests based on attributes, not skip them:
$ nosetests ss_test.py -a speed=slow -v
test_end_to_end_something (ss_test.MyTestCase) ... ok
test_long_integration (ss_test.MyTestCase) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.004s
OK
If you want to do fancier test selection, you can use the "-A" option with full Python expression syntax:
$ nosetests ss_test.py -A "speed=='slow'" -v
test_end_to_end_something (ss_test.MyTestCase) ... ok
test_long_integration (ss_test.MyTestCase) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.003s
OK
Here is how to skip the slow tests:
$ nosetests ss_test.py -A "speed!='slow'" -v
test_super_long_integration (ss_test.MyOtherTestCase) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.003s
OK
