How to stop Python unittest from printing test docstring?

I've noticed that when my Python unit tests have a docstring at the top of the test function, the framework sometimes prints it in the test output. Normally, the test output contains one test per line:
<test name> ... ok
If the test has a docstring of the form
"""
test that so and so happens
"""
then all is well. But if the test has a docstring all on one line:
"""test that so and so happens"""
then the test output takes more than one line and includes the doc like this:
<test name>
test that so and so happens ... ok
I can't find where this behavior is documented. Is there a way to turn it off?

The first line of the docstring is used; the responsible method is TestCase.shortDescription(), which you can override in your test cases:
class MyTests(unittest.TestCase):
    # ...
    def shortDescription(self):
        return None
By always returning None you turn the feature off entirely. If you want to format the docstring differently, it's available as self._testMethodDoc.
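For example, here is a minimal sketch of that second option, building a custom description from self._testMethodDoc (the newline-plus-tab formatting is just an arbitrary choice for illustration):
import unittest

class MyTests(unittest.TestCase):
    def shortDescription(self):
        # Show the whole docstring, indented on its own line, instead of
        # the default first-line summary; None disables the description.
        doc = self._testMethodDoc
        return "\n\t" + doc.strip() if doc else None

    def test_something(self):
        """test that so and so happens"""
        self.assertTrue(True)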

This is an improved version of Martijn Pieters' excellent answer.
Instead of overriding that method for every test, it is more convenient (at least for me) to add the following file to your list of tests. Name the file test_[whatever you want].py.
test_config.py
import unittest
# Hide docstrings in verbose mode
unittest.TestCase.shortDescription = lambda x: None
This code snippet could also be placed in the __init__.py files of the test folder.
In my case, I just added it to the root folder of my project, scripts, since I run python -m unittest from scripts to discover all the unit tests of my project. As this is the only test*.py file on that directory level, it loads before any other test.
(I tried the snippet in the __init__.py of the root folder, but it didn't seem to work, so I stuck with the file approach.)
BTW: I actually prefer lambda x: "\t" instead of lambda x: None

After reading this I made a plugin for nosetests to avoid the boilerplate.
https://github.com/MarechJ/nosenodocstrings

Related

Python mock patch tests work separately but not combined

I have written unit tests using unittest.mock and I am facing some issues when running the tests together versus singly. I am mocking jira.JIRA from the JIRA SDK (on PyPI). My tests look something like:
import ...

class MyTests(BaseTest):
    setUp ...
    tearDown ...

    @mock.patch('os.system')
    @mock.patch('jira.JIRA')
    def test_my_lambda(self, mock_jira, mock_os) -> None:
        # run some code
        mock_jira().transition_issue.assert_called_once()
        mock_jira().transition_issue.assert_called_with(param1, param2)
        # other mock conditions
Now, of course, from my understanding a MagicMock is returned, I can make sub-calls on it, and I can still call assertions on those sub-mocks. This test passes in isolation; however, when running many tests together it reports that transition_issue was not called, despite my stepping through the code and seeing that it is called.
Am I missing something?
I tried to look into resetting the mock after each test, but I assumed the patch took care of that.
I also tried to do the imports relating to jira.JIRA within the function (anything which imports my class that has the jira.JIRA() call in it), but this did not work.
The fact it works as a single test is confusing me.
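One way to rule out state leaking between tests is to manage the patch by hand, so that each test verifiably starts and stops its own fresh mock. This is only a sketch of that idea, not a confirmed fix for this question, and it assumes the code under test looks jira.JIRA up at call time:
import unittest
from unittest import mock

class MyTests(unittest.TestCase):  # stands in for BaseTest above
    def setUp(self):
        # Start the patch manually; every test gets a brand-new MagicMock,
        # and addCleanup undoes the patch even if the test fails.
        patcher = mock.patch('jira.JIRA')
        self.mock_jira = patcher.start()
        self.addCleanup(patcher.stop)

    def test_my_lambda(self):
        # run some code, then assert against the per-test mock
        self.mock_jira().transition_issue.assert_called_once()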

How to execute test from inside a class in pytest?

I am trying to run a test method that is inside a class in Python, using the pytest framework.
Not sure what is going wrong, but the test is not getting picked up. I made sure the package name, module name, class name and function name all start with "test".
There is no content inside __init__.py; I am not sure if I need to include anything in this file to make sure the tests under the class are picked up.
The interpreter I am using is shown in the screenshot. Also, I have added a screenshot showing the code so it becomes easier to understand the directory structure.
I visited several blogs and this, but none of them helped me resolve this.
Could you please help?
By default, pytest expects test class names to start with Test: TestDemo, not test_demo. The rest of your names follow the correct scheme for the defaults, so if you change the class name to TestDemo, pytest should be able to find it.
pytest docs on test discovery: https://docs.pytest.org/en/stable/example/pythoncollection.html
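For illustration, a minimal sketch of a module that the default discovery rules will collect (the file, class and function names are hypothetical):
# test_demo.py -- matches the default test_*.py file pattern
class TestDemo:                  # starts with "Test", so it is collected
    def test_addition(self):     # starts with "test", so it is collected
        assert 1 + 1 == 2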
To test a function in a class with pytest, we can use this command:
$ pytest -v /path/to/test_file.py::ClassName::test_function_name
Remember to use "::" as the separator.

Django Test Case Can't run method

I am just getting started with Django, so this may be something stupid, but I am not even sure what to google at this point.
I have a method that looks like this:
def get_user(self, user):
    return Utilities.get_userprofile(user)
The method it calls looks like this:
@staticmethod
def get_userprofile(user):
    return UserProfile.objects.filter(user_auth__username=user)[0]
When I include this in the view, everything is fine. When I write a test case that uses any of the methods inside the Utilities class, I get None back:
Two test cases:
def test_stack_overflow(self):
    a = ObjName()
    print(a.get_user('admin'))

def test_Utility(self):
    print(Utilities.get_user('admin'))
Results:
Creating test database for alias 'default'...
None
..None
.
----------------------------------------------------------------------
Can someone tell me why this works in the view but not inside the test case, and why it does not generate any error messages?
Thanks
Verify that your unit test complies with the following:
The test class must be in a file named test*.py.
The test class must be subclassed from unittest.TestCase.
The test class should have a setUp function to create objects in the database (usually done this way, but object creation can happen in the test functions as well).
The test functions should start with test so that they are identified and run by the ./manage.py test command.
The test class may have a tearDown function to properly end the unit test case.
Test Case Execution process:
When you run ./manage.py test, Django sets up a test database (test_your_database_name), creates all the objects mentioned in the setUp function, and executes the test functions in the order they appear in the class. Once all the test functions have been executed, it looks for a tearDown function, runs it if present, and destroys the test database.
It may be that you have not created the required objects in the setUp function or elsewhere in the test class.
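For reference, here is a minimal sketch of such a setUp; the import paths and the UserProfile field values are assumptions, since they are not given in the question:
from django.contrib.auth.models import User
from django.test import TestCase

from myapp.models import UserProfile   # assumed import path
from myapp.utils import Utilities      # assumed import path

class UtilitiesTests(TestCase):
    def setUp(self):
        # The test database starts empty; create the rows the lookup needs,
        # otherwise UserProfile.objects.filter(...) matches nothing.
        user = User.objects.create_user(username='admin', password='secret')
        UserProfile.objects.create(user_auth=user)

    def test_get_userprofile(self):
        self.assertIsNotNone(Utilities.get_userprofile('admin'))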
Could you kindly post the entire traceback and the test file so we can help you better?

How do I register a mark in pytest 2.5.1?

I've read the pytest documentation. Section 7.4.3 gives instructions for registering markers. I have followed the instructions exactly, but it doesn't seem to have worked for me.
I'm using Python 2.7.2 and pytest 2.5.1.
I have a pytest.ini file at the root of my project. Here is the entire contents of that file:
[pytest]
python_files=*.py
python_classes=Check
python_functions=test
rsyncdirs = . logs
rsyncignore = docs archive third_party .git procs
markers =
    mammoth: mark a test as part of the Mammoth regression suite
A little background to give context: The folks that created the automation framework I am working on no longer work for the company. They created a custom plugin that extended the functionality of the default pytest.mark. From what I understand, the only thing the custom plugin does is make it so that I can add marks to a test like this:
@pytest.marks(CompeteMarks.MAMMOTH, CompeteMarks.QUICK_TEST_A, CompeteMarks.PROD_BVT)
def my_test(self):
instead of like this:
@pytest.mark.mammoth
@pytest.mark.quick_test_a
@pytest.mark.prod_bvt
def my_test(self):
The custom plugin code remains present in the code base. I do not know if that has any negative effect on trying to register a mark, but thought it was worth mentioning if someone knows otherwise.
The problem I'm having is that when I execute the following command on a command line, I do NOT see my mammoth mark listed among the other registered marks.
py.test --markers
The output returned after running the above command is this:
#pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
#pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
#pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: #parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
#pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
#pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
#pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
What am I doing wrong and how can I get my mark registered?
One more piece of info: the mammoth mark was applied to a single test (shown below) when I ran the py.test --markers command:
@pytest.mark.mammoth
def my_test(self):
If I understand your comments correctly the project layout is the following:
~/customersites/
~/customersites/automation/
~/customersites/automation/pytest.ini
Then invoking py.test as follows:
~/customersites$ py.test --markers
will make py.test look for a configuration file in ~/customersites/ and subsequently all the parents: ~/, /home/, /. In this case this will not make it find pytest.ini.
However, when you invoke it with one or more arguments, py.test will try to interpret each argument as a file or directory and start looking for a configuration file from that directory and its parents. It then iterates through all arguments in order until it finds the first configuration file.
So with the above directory layout invoking py.test as follows will make it find pytest.ini and show the markers registered in it:
~/customersites$ py.test automation --markers
as now py.test will first look in ~/customersites/automation/ for a configuration file before going up the directory tree and looking in ~/customersites/. But since it finds one at ~/customersites/automation/pytest.ini, it stops there and uses that.
Have you tried here?
From the docs:
API reference for mark related objects
class MarkGenerator [source]
    Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance.
Example:
import pytest

@pytest.mark.slowtest
def test_function():
    pass
will set a slowtest MarkInfo object on the test_function object.
class MarkDecorator(name, args=None, kwargs=None) [source]
    A decorator for test functions and test classes. When applied it will create MarkInfo objects which may be retrieved by hooks as item keywords.
MarkDecorator instances are often created like this:
mark1 = pytest.mark.NAME # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2
def test_function():
    pass

python unittest faulty module

I have a module that I need to test in python.
I'm using the unittest framework but I ran into a problem.
The module has some function definitions, one of which (readConfiguration) is called when the module is imported, like so:
.
.
.
def readConfiguration(file="default.xml"):
    # do some reading from xml

readConfiguration()
This is a problem because when I try to import the module, it also runs readConfiguration(), which fails and takes the module and the program down with it (the configuration file does not exist in the test environment).
I'd like to be able to test the module independent of any configuration files.
I didn't write the module and it cannot be re-factored.
I know I can include a dummy configuration file, but I'm looking for a cleaner, more elegant solution.
As commenters have already pointed out, imports should never have side effects, so try to get the module changed if at all possible.
If you really, absolutely, cannot do this, there might be another way: let readConfiguration() be called, but stub out its dependencies. For instance, if it uses the built-in open() function, you could mock that, as demonstrated in the mock documentation:
>>> from unittest.mock import MagicMock, patch, sentinel
>>> mock = MagicMock(return_value=sentinel.file_handle)
>>> with patch('builtins.open', mock):
...     import the_broken_module
...     # do your testing here
Replace sentinel.file_handle with StringIO("<contents of mock config file>") if you need to supply actual content.
It's brittle as it depends on the implementation of readConfiguration(), but if there really is no other way, it might be useful as a last resort.
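For instance, a sketch of that StringIO variant; the module name the_broken_module and the XML content are invented for illustration:
>>> from io import StringIO
>>> from unittest.mock import MagicMock, patch
>>> fake_config = StringIO("<config><option>value</option></config>")
>>> with patch('builtins.open', MagicMock(return_value=fake_config)):
...     import the_broken_module  # readConfiguration() now reads the fake file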
