I'm having some difficulty understanding how I would go about changing a unittest report similar to:
======================================================================
FAIL: test_equal (__main__.InequalityTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_notequal.py", line 7, in test_equal
self.assertNotEqual(1, 3-2, "My Custom Message")
AssertionError: 1 == 1
to a report resembling:
Line 7: My Custom Message
How could I parse these reports?
After further research, I found that my problem can be solved by overriding the default TestResult class, as seen here: Turn some print off in python unittest,
or by using third-party customization such as nose or an HTMLTestRunner.
In case you need to create a custom report based on the success/failure of individual test cases, you can create a custom TestRunner that uses a custom TestResult, and override the addSuccess and addFailure methods in the TestResult class. This provides a callback for processing each outcome as required.
from unittest import TextTestRunner, TextTestResult

class CustomTestResult(TextTestResult):
    def addSuccess(self, test):
        super(CustomTestResult, self).addSuccess(test)
        # your logic to log success cases

    def addFailure(self, test, err):
        super(CustomTestResult, self).addFailure(test, err)
        # your logic to log failure cases

class CustomTestRunner(TextTestRunner):
    def _makeResult(self):
        # TextTestRunner constructs its result object with these three arguments
        return CustomTestResult(self.stream, self.descriptions, self.verbosity)
    # run() can be inherited unchanged from TextTestRunner
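A minimal way to wire this in, assuming the classes above, is to hand the custom runner to unittest.main:
import unittest

if __name__ == '__main__':
    # every test now reports through CustomTestResult
    unittest.main(testRunner=CustomTestRunner())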
For reference, see the documentation of the Python unittest module.
Related
I'm writing integration tests for an Alexa app.
Our application uses a controller-request-response pattern. The controller receives a request with a specified intent and session variables, routes the request to functions that do some computation with the session variables, and returns a response object with the results of that computation.
We get the right behavior from UnhandledIntentTestCase as far as test_for_smoke is concerned. However, test_returning_reprompt_text
never fires, because returns_reprompt_text is never overridden.
Can someone explain how I can override it in the parent class and/or
how the correct intent name is passed to the request object in setUpClass?
intent_base_case.py
import unittest
import mycity.intents.intent_constants as intent_constants
import mycity.mycity_controller as mcc
import mycity.mycity_request_data_model as req
import mycity.test.test_constants as test_constants
###############################################################################
# TestCase parent class for all intent TestCases, which are integration tests #
# to see if any changes in codebase have broken response-request model. #
# #
# NOTE: Assumes that address has already been set. #
###############################################################################
class IntentBaseCase(unittest.TestCase):
__test__ = False
intent_to_test = None
returns_reprompt_text = False
    @classmethod
def setUpClass(cls):
cls.controller = mcc.MyCityController()
cls.request = req.MyCityRequestDataModel()
key = intent_constants.CURRENT_ADDRESS_KEY
cls.request._session_attributes[key] = "46 Everdean St"
cls.request.intent_name = cls.intent_to_test
cls.response = cls.controller.on_intent(cls.request)
    @classmethod
def tearDownClass(cls):
cls.controller = None
cls.request = None
def test_for_smoke(self):
self.assertNotIn("Uh oh", self.response.output_speech)
self.assertNotIn("Error", self.response.output_speech)
def test_correct_intent_card_title(self):
self.assertEqual(self.intent_to_test, self.response.card_title)
    @unittest.skipIf(not returns_reprompt_text,
                     "{} shouldn't return a reprompt text".format(intent_to_test))
def test_returning_reprompt_text(self):
self.assertIsNotNone(self.response.reprompt_text)
    @unittest.skipIf(returns_reprompt_text,
                     "{} should return a reprompt text".format(intent_to_test))
def test_returning_no_reprompt_text(self):
self.assertIsNone(self.response.reprompt_text)
test_unhandled_intent.py
import mycity.test.intent_base_case as base_case
########################################
# TestCase class for unhandled intents #
########################################
class UnhandledIntentTestCase(base_case.IntentBaseCase):
__test__ = True
intent_to_test = "UnhandledIntent"
returns_reprompt_text = True
output
======================================================================
FAIL: test_correct_intent_card_title (mycity.test.test_unhandled_intent.UnhandledIntentTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/wdrew/projects/alexa_311/my_city/mycity/mycity/test/intent_base_case.py", line 44, in test_correct_intent_card_title
self.assertEqual(self.intent_to_test, self.response.card_title)
AssertionError: 'UnhandledIntent' != 'Unhandled intent'
- UnhandledIntent
? ^
+ Unhandled intent
? ^^
======================================================================
FAIL: test_returning_no_reprompt_text (mycity.test.test_unhandled_intent.UnhandledIntentTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/wdrew/projects/alexa_311/my_city/mycity/mycity/test/intent_base_case.py", line 56, in test_returning_no_reprompt_text
self.assertIsNone(self.response.reprompt_text)
AssertionError: 'So, what can I help you with today?' is not None
----------------------------------------------------------------------
This is because of execution order. The skipIf decorators are executed once, when the IntentBaseCase class is parsed. They aren't re-executed for each subclass or for each call to the test function.
The decorator pattern for skipIf is designed for use with fixed global conditions such as versions of dependent modules, the operating system, or some other external resource whose availability can be calculated or known in the global context.
Skipping tests is also something that should be done for external reasons, not for internal ones such as the needs of a sub-class. A skip is still a kind of failing test which is indicated in the report so you can see your test suite isn't exercising the whole of the functional scope of the project.
You should redesign your base class structure so that the test functions are only available to run in the subclasses that need them, and avoid using skip for this. My recommendation would be:
class IntentBaseCase(unittest.TestCase):
...
class RepromptBaseCase(IntentBaseCase):
def test_returning_reprompt_text(self):
self.assertIsNotNone(self.response.reprompt_text)
class NoRepromptBaseCase(IntentBaseCase):
def test_returning_no_reprompt_text(self):
self.assertIsNone(self.response.reprompt_text)
You should also consider moving the response portion out of the setUp step and into a test_ function of its own, and changing these test_returning functions into simpler assertReprompt and assertNoReprompt helpers. It's a good idea to set up the tests in setUp, but not a good idea to run the actual code under test there.
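Under that design, the existing subclass just inherits from whichever base matches its behavior; a sketch using the classes above (the returns_reprompt_text flag is no longer needed for skipping):
class UnhandledIntentTestCase(RepromptBaseCase):
    __test__ = True
    intent_to_test = "UnhandledIntent"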
I'm currently running my tests like this:
tests = unittest.TestLoader().discover('tests')
unittest.TextTestRunner().run(tests)
Now I want to run a specific test knowing its name (like test_valid_user) but not knowing its class. If there is more than one test with such a name then I would like to run all such tests. Is there any way to filter tests after discover?
Or maybe there are other solutions to this problem (please note that it shouldn't be done from command line)?
You can use the unittest.loader.TestLoader.testMethodPrefix instance variable to make the loader match test methods with a different prefix than "test".
Say you have a tests directory with this kind of unit test:
import unittest
class MyTest(unittest.TestCase):
def test_suite_1(self):
self.assertFalse("test_suite_1")
def test_suite_2(self):
self.assertFalse("test_suite_2")
def test_other(self):
self.assertFalse("test_other")
You can write your own discover function to discover only test functions starting with "test_suite_", for instance:
import unittest
def run_suite():
loader = unittest.TestLoader()
loader.testMethodPrefix = "test_suite_"
suite = loader.discover("tests")
result = unittest.TestResult()
suite.run(result)
for test, info in result.failures:
print(info)
if __name__ == '__main__':
run_suite()
Remark: the argument "tests" in the discover method is a directory path, so you may need to write a full path.
As a result, you'll get:
Traceback (most recent call last):
File "/path/to/tests/test_my_module.py", line 8, in test_suite_1
self.assertFalse("test_suite_1")
AssertionError: 'test_suite_1' is not false
Traceback (most recent call last):
File "/path/to/tests/test_my_module.py", line 11, in test_suite_2
self.assertFalse("test_suite_2")
AssertionError: 'test_suite_2' is not false
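If you need to match an exact method name rather than a prefix, you can also flatten the discovered suite and keep only the matching tests. A minimal sketch, assuming your tests live in a tests directory and using the public TestCase.id() to read each method name:
import unittest

def iter_tests(suite):
    # recursively flatten nested TestSuite objects into TestCase instances
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item

def run_named(name, start_dir="tests"):
    discovered = unittest.TestLoader().discover(start_dir)
    # keep every test whose method name matches, whatever its class
    matching = unittest.TestSuite(
        t for t in iter_tests(discovered) if t.id().split(".")[-1] == name
    )
    return unittest.TextTestRunner().run(matching)

if __name__ == "__main__":
    run_named("test_valid_user")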
Another, simpler way would be to use py.test with the -k option, which does a test-name keyword scan: it will run any test whose name matches the keyword expression.
Although that uses the command line, which you didn't want, note that you can call the command line from your code using subprocess.call to pass any arguments you want dynamically.
E.g.: Assuming you have the following tests:
def test_user_gets_saved(self): pass
def test_user_gets_deleted(self): pass
def test_user_can_cancel(self): pass
You can call py.test from the CLI:
$ py.test -k "test_user"
Or from code:
return_code = subprocess.call('py.test -k "test_user"', shell=True)
There are two ways to run a single test method:
Command line:
$ python -m unittest test_module.TestClass.test_method
Using Python script:
import unittest
class TestMyCode(unittest.TestCase):
def setUp(self):
pass
def test_1(self):
self.assertTrue(True)
def test_2(self):
self.assertTrue(True)
if __name__ == '__main__':
testSuite = unittest.TestSuite()
testSuite.addTest(TestMyCode('test_1'))
    runner = unittest.TextTestRunner()
runner.run(testSuite)
Output:
------------------------------------------------------------
Ran 1 test in 0.000s
OK
My Python version is 3.5.1
I have some simple code (tests.py):
import unittest
class SimpleObject(object):
array = []
class SimpleTestCase(unittest.TestCase):
def test_first(self):
simple_object = SimpleObject()
simple_object.array.append(1)
self.assertEqual(len(simple_object.array), 1)
def test_second(self):
simple_object = SimpleObject()
simple_object.array.append(1)
self.assertEqual(len(simple_object.array), 1)
if __name__ == '__main__':
unittest.main()
If I run it with the command python tests.py, I get these results:
.F
======================================================================
FAIL: test_second (__main__.SimpleTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 105, in test_second
self.assertEqual(len(simple_object.array), 1)
AssertionError: 2 != 1
----------------------------------------------------------------------
Ran 2 tests in 0.003s
FAILED (failures=1)
Why is this happening, and how do I fix it? I expect each test run to be independent (each test should pass), but as we can see, it is not.
The array is shared by all instances of the class. If you want the array to be unique to each instance, you need to create it in the initializer:
class SimpleObject(object):
def __init__(self):
self.array = []
For more information take a look at this question: class variables is shared across all instances in python?
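A quick illustration of the difference outside of unittest (hypothetical classes for demonstration):
class Shared:
    array = []              # class attribute: one list shared by every instance

class PerInstance:
    def __init__(self):
        self.array = []     # instance attribute: a fresh list per instance

a, b = Shared(), Shared()
a.array.append(1)
print(b.array)              # [1] -- b sees a's append; same underlying list

c, d = PerInstance(), PerInstance()
c.array.append(1)
print(d.array)              # [] -- independent lists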
This can also be accomplished directly in unittest if you prefer to use only one class. Implement the setUp method: setUp runs before each test method in the class. It is similar to __init__ but conforms to the unittest library. Its counterpart is tearDown, which is executed after each test method, in case you need to build and dispose of objects. Example:
import unittest

class SimpleObject(unittest.TestCase):
    def setUp(self):
        # I run first, before each test
        self.array = []

    def test_some(self):  # must start with "test" to be discovered
        self.assertTrue(self.array == [])
        # ... some test code here ...

    def tearDown(self):
        # I run last, after each test
        self.array = []  # or whatever teardown code you need
enjoy!
I am trying to use unittest to automate a test case. However, when the test passes or fails, it only writes the result to the console. Is there any way I can have unittest return some test result status code? This is because I would like to add another function to my test script that records the test result in our database. What is the best way to assess programmatically whether a test passed or failed?
This depends on how much information you need about the test results. If you just want to know whether the tests passed or failed, using unittest.main:
"By default main calls sys.exit() with an exit code indicating success
or failure of the tests run"
So checking the return value (0=passed, non-0=failed) of your test script is enough to get a passed/failed answer.
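For example, a minimal sketch of checking that exit code from another script (the file name tests.py is a placeholder), reusing subprocess.call as mentioned elsewhere in this thread:
import subprocess

# run the test script as a child process and inspect its exit status
return_code = subprocess.call(["python", "tests.py"])
print("passed" if return_code == 0 else "failed")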
If you need more details about the tests, you can skip the unittest.main() call and call the TextTestRunner.run method directly, which returns a TestResult object describing the results. An example:
import unittest
from unittest import TextTestRunner
class TestExample(unittest.TestCase):
def test_pass(self):
self.assertEqual(1, 1, 'Expected 1 to equal 1')
def test_fail(self):
self.assertEqual(1, 2, 'uh-oh')
if __name__ == '__main__':
test_suite = unittest.TestLoader().loadTestsFromTestCase(TestExample)
    test_result = TextTestRunner().run(test_suite)
... and you can now inspect the test_result variable to get more details about the test run:
>>> test_result.testsRun
2
>>> test_result.failures
[(<test_example.TestExample testMethod=test_fail>, 'Traceback (most recent call last):\n File "test_example.py", line 9, in test_fail\n self.assertEqual(1, 2, \'uh-oh\')\nAssertionError: uh-oh\n')]
>>> len(test_result.failures)
1
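If all you need to record in your database is a single overall pass/fail flag, TestResult.wasSuccessful() returns True only when there were no failures or errors:
>>> test_result.wasSuccessful()
False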
The properties of TestResult are documented here, and the examples and options for running the test runner are explained here.
Here is my LoginResourceHelper test class:
from flask.ext.testing import TestCase
class LoginResourceHelper(TestCase):
content_type = 'application/x-www-form-urlencoded'
def test_create_and_login_user(self, email, password):
user = UserHelper.add_user(email, password)
self.assertIsNotNone(user)
response = self.client.post('/', content_type=self.content_type,
data=UserResourceHelper.get_user_json(
email, password))
self.assert200(response)
# HTTP 200 OK means the client is authenticated and cookie
# USER_TOKEN has been set
return user
def create_and_login_user(email, password='password'):
"""
Helper method, also to abstract the way create and login works.
Benefit? The guts can be changed in future without breaking the clients
that use this method
"""
return LoginResourceHelper().test_create_and_login_user(email, password)
When I call create_and_login_user('test_get_user'), I see the following error:
line 29, in create_and_login_user
return LoginResourceHelper().test_create_and_login_user(email, password)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 191, in __init__
(self.__class__, methodName))
ValueError: no such test method in <class 'core.expense.tests.harness.LoginResourceHelper.LoginResourceHelper'>: runTest
Python's unittest module (which Flask-Testing uses behind the scenes) organizes test code in a special way.
In order to run a specific method from a class that is derived of TestCase you need to do the following:
LoginResourceHelper('test_create_and_login_user').test_create_and_login_user(email, password)
What unittest does behind the scenes
In order to understand why you must do this, you need to understand how the default TestCase object works.
Normally, when inherited, TestCase is expecting to have a runTest method:
class ExampleTestCase(TestCase):
def runTest(self):
# Do assertions here
However, if you need to have multiple test cases, you would have to do this for every single one.
Since this is a tedious thing to do, they decided to do the following:
class ExampleTestCase(TestCase):
def test_foo(self):
# Do assertions here
def test_bar(self):
# Do other assertions here
This is called a test fixture. But since we did not declare a runTest(), you must now specify which method you want the TestCase to run, which is exactly what you need to do here:
>>> ExampleTestCase('test_foo').test_foo()
>>> ExampleTestCase('test_bar').test_bar()
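Note that calling the method directly like this bypasses setUp and tearDown. If you want the full test lifecycle, one option (a sketch, not the only way) is to hand the constructed case to a runner, since a TestCase instance is itself runnable:
import unittest

case = ExampleTestCase('test_foo')
# running through a runner invokes setUp/tearDown around the method
unittest.TextTestRunner().run(case)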
Normally, the unittest module will do all of this on the back end, along with some other things:
Adding TestCases to a Test Suite (which is normally done by using a TestLoader)
Calling the correct TestRunner (which will run all of the tests and report the results)
But since you are circumventing the normal unittest execution, you have to do the work that unittest regularly does.
For a really in-depth understanding, I highly recommend that you read the unittest docs.