Disable mypy for test functions - python

Is it possible to disable mypy checking for functions that are used for testing, such as those whose names start with test_? For me, adding type annotations to tests is redundant:
def test_dummy():  # <- mypy raises "error: Function is missing a return type annotation"
    pass  # empty test
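One approach (a sketch, assuming your tests live in a tests package and the error comes from running mypy with disallow_untyped_defs enabled) is a per-module override in mypy.ini that relaxes that check for test modules only:

```ini
; mypy.ini -- hypothetical layout: strict everywhere, relaxed for tests/
[mypy]
disallow_untyped_defs = True

[mypy-tests.*]
disallow_untyped_defs = False
```

Alternatively, annotating each test as def test_dummy() -> None: silences the error without any configuration change.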

Related

pylint 'unused_variable' warning on function definition

I have defined a function in __init__.py.
While running pylint on __init__.py, it gives an 'unused_variable' warning on the function name.
I don't want to disable this warning in general (for actual unused variables, etc.) but obviously I don't want a warning for the function definition. Sample __init__.py:
import os
test_enabled = True
run_all_tests = False  # Raises 'unused_variable' warning; expected, and I want to keep such warnings
def is_test_enabled():  # Raises 'unused_variable' warning; unexpected, and I want to disable it
    return <test_object> if test_enabled else None

Python: unused argument needed for compatibility. How to avoid Pylint complaining about it

For my code in Python, I would like to call many functions with a specific argument. However, for some functions, that argument does not do anything. Still, I would like to add the argument for compatibility reasons. For example, consider the following minimal working example:
def my_function(unused=False):
    """Function with unused argument."""
    return True
Obviously, the argument unused is not used, so Pylint throws a warning:
W0613: Unused argument 'unused' (unused-argument)
My point is that I do not want to remove the argument unused, because the function my_function will be called in a similar way as many other functions for which unused is used.
My question: How can I avoid the warning from Pylint without removing the optional argument?
Option 1: I can think of two options, but these options do not satisfy me. One option is to add some useless code, such that unused is used, e.g.,
def my_function(unused=False):
    """Function with unused argument."""
    if unused:
        dummy = 10
        del dummy
    return True
This feels like a waste of resources, and it only clutters the code.
Option 2: The second option is to suppress the warning, e.g., like this:
def my_function(unused=False):
    """Function with unused argument."""
    # pylint: disable=unused-argument
    return True
I also do not really like this option, because usually Pylint warnings are a sign of bad style, so I am looking for a different way of coding that avoids this warning.
What other options do I have?
I do not believe disabling some pylint warnings is bad style, as long as it is done carefully with clear intent and as specific as possible. For this purpose it is important to activate the useless-suppression check. When it is active pylint will warn you if some messages are locally disabled for no good reason. Add this to your .pylintrc:
[MESSAGES CONTROL]
enable=useless-suppression
For example, I would recommend disabling the exact occurrence of the issue, as in the following example:
def my_function(
    used,
    unused=False,  # pylint: disable=unused-argument
):
    """Function with unused argument."""
    return used
Adding a leading underscore should also keep pylint from triggering:
def my_function(used, _unused=False):
    """Function with unused argument."""
    return used
Another commonly used pattern is the following:
def my_function(used, unused_a, unused_b=False):
    """Function with unused argument."""
    _ = (unused_a, unused_b)
    return used
It would be possible to work out a solution by playing around with **kwargs. For example:
def _function_a(one, two=2):
    return one + two

def _function_b(one, **kwargs):
    return one + kwargs['two']

def _function_c(one, **_kwargs):
    return one

def _main():
    for _function in [_function_a, _function_b, _function_c]:
        print(_function.__name__, _function(1, two=4))

Possible to skip a test depending on fixture value?

I have a lot of tests broken into many different files. In my conftest.py I have something like this:
@pytest.fixture(scope="session",
                params=["foo", "bar", "baz"])
def special_param(request):
    return request.param
While the majority of the tests work with all values, some only work with foo and baz. This means I have this in several of my tests:
def test_example(special_param):
    if special_param == "bar":
        pytest.skip("Test doesn't work with bar")
I find this a little ugly and was hoping for a better way to do it. Is there any way to use the skip decorator to achieve this? If not, is it possible to write my own decorator that can do this?
You can, as one of the comments from @abarnert suggests, write a custom decorator using functools.wraps for exactly this purpose. In my example below, I skip a test when a configs fixture (some configuration dictionary) has a report_type value of enhanced rather than standard (but it could be whatever condition you want to check).
Here's the fixture we'll use to decide whether to skip a test:
@pytest.fixture
def configs() -> Dict:
    return {"report_type": "enhanced", "some_other_fixture_params": 123}
Now we write a decorator that will skip the test by inspecting the fixture configs contents for its report_type key value:
def skip_if_report_enhanced(test_function: Callable) -> Callable:
    @wraps(test_function)
    def wrapper(*args, **kwargs):
        configs = kwargs.get("configs")  # configs is the fixture passed into the pytest function
        report_type = configs.get("report_type", "standard")
        if report_type == "enhanced":
            return pytest.skip(f"Skipping {test_function.__name__}")  # skip!
        return test_function(*args, **kwargs)  # otherwise, run the test
    return wrapper  # return the decorated test callable
Note here that I am using kwargs.get("configs") to pull the fixture out here.
Below is the test itself; its logic is irrelevant, only whether it runs or not:
@skip_if_report_enhanced
def test_that_it_ran(configs):
    print("The test ran!")  # shouldn't get here if the report type is set to enhanced
The output from running this test:
============================== 1 skipped in 0.55s ==============================
SKIPPED [100%]: Skipping test_that_it_ran
Process finished with exit code 0
One solution is to override the fixture with @pytest.mark.parametrize. For example:
@pytest.mark.parametrize("special_param", ["foo"])
def test_example(special_param):
    ...  # do test
Another possibility is to not use the special_param fixture at all and explicitly use the value "foo" where needed. The downside is that this only works if there are no other fixtures that also rely on special_param.

How to disable a try/except block during testing?

I wrote a cronjob that iterates through a list of accounts and performs some web call for them (shown below):
for account in self.ActiveAccountFactory():
    try:
        self.logger.debug('Updating %s', account.login)
        self.update_account_from_fb(account)
        self.check_balances()
        self.check_rois()
    except Exception,e:
        self.logger.exception(traceback.format_exc())
Because this job is run by heroku once every 10 minutes, I do not want the entire job to fail just because one account is running into issues (it happens). I placed a try/except clause here so that the task is "fault-tolerant".
However, I noticed that when I am testing, this try/except block gives me cryptic problems because the task is allowed to continue executing even though there is some serious error.
What is the best way to disable a try/except block during testing?
I've thought about implementing the code directly like this:
for account in self.ActiveAccountFactory():
    self.logger.debug('Updating %s', account.login)
    self.update_account_from_fb(account)
    self.check_balances()
    self.check_rois()
    self.logger.exception(traceback.format_exc())
in my test cases but then this makes my tests very clumsy as I am copying large amounts of code over.
What should I do?
First of all: don't swallow all exceptions using except Exception. It's bad design. So cut it out.
With that out of the way:
One thing you could do is set up a monkeypatch for the logger.exception method. Then you can handle the test however you see fit based on whether it was called: by creating a mock logger, a separate testing logger, or a custom testing logger class that stops the tests when certain exceptions occur. You could even choose to end the testing immediately by raising an error.
Here is an example using pytest.monkeypatch. I like pytest's way of doing this because they already have a predefined fixture set up for it, and no boilerplate code is required. However, there are other ways to do this as well (such as using unittest.mock.patch as part of the unittest module).
I will call your class SomeClass. What we will do is create a patched version of your SomeClass object as a fixture. The patched version will not log to the logger; instead, it will have a mock logger. Anything that happens to the logger will be recorded in the mock logger for inspection later.
import pytest
import unittest.mock as mock  # import mock for Python 2

@pytest.fixture
def SomeClassObj_with_patched_logger(monkeypatch):
    ##### SETUP PHASE ####
    # create a basic mock logger:
    mock_logger = mock.Mock(spec=LoggerClass)
    # patch the 'logger' attribute so that when it is called on
    # 'some_class_instance' (which is bound to 'self' in the method)
    # things are re-routed to mock_logger
    monkeypatch.setattr('some_class_instance.logger', mock_logger)
    # now create the class instance you will test with the same name
    # as the patched object
    some_class_instance = SomeClass()
    # the class object you created will now be patched
    # we can now send that patched object to any test we want
    # using the standard pytest fixture way of doing things
    yield some_class_instance
    ###### TEARDOWN PHASE #######
    # after all tests have been run, we can inspect what happened to
    # the mock logger like so:
    print('\n#### ', mock_logger.method_calls)
If call.exception appears in the method calls of the mock logger, you know that method was called. There are a lot of other ways you could handle this as well; this is just one.
If you're using the logging module, LoggerClass should just be logging.Logger. Alternatively, you can just do mock_logger = mock.Mock(). Or, you could create your own custom testing logger class that raises an exception when its exception method is called. The sky is the limit!
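As a sketch of that last idea (the class name is made up, and this assumes the code under test calls self.logger.exception(...) from inside an except block), a logger subclass can log and then re-raise the active exception so it surfaces in the test:

```python
import logging

class RaisingLogger(logging.Logger):
    """Hypothetical testing logger: logs the exception, then re-raises it."""

    def exception(self, msg, *args, **kwargs):
        super().exception(msg, *args, **kwargs)
        raise  # re-raise the exception currently being handled
```

Swapping this in for the real logger during tests turns every swallowed exception into a test failure.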
Use your patched object in any test like so:
def test_something(SomeClassObj_with_patched_logger):
    # no need to do the line below really, just getting
    # a shorter variable name
    my_obj = SomeClassObj_with_patched_logger
    #### DO STUFF WITH my_obj #####
If you are not familiar with pytest, see this training video for a little bit more in depth information.
try...except blocks are difficult when you are testing because they catch, and try to dispose of, errors you would really rather see, as you have found out. While testing, replace
except Exception as e:
(don't use except Exception,e: it's not forward-compatible) with an exception type that is really unlikely to occur in your circumstances, such as
except AssertionError as e:
A text editor will do this for you (and reverse it afterwards) at the cost of a couple of mouse clicks.
You can make callables test-aware by adding a _testing=False parameter. Use it to code alternate pathways in the callable for when testing, and pass _testing=True when calling from a test file.
For the situation presented in this question, putting if _testing: raise in the except body would 'uncatch' the exception.
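A minimal sketch of that pattern (process_accounts and its arguments are made up for illustration): the loop swallows per-account errors in production, but re-raises them when _testing is passed:

```python
def process_accounts(accounts, update, _testing=False):
    """Update each account; collect failures instead of aborting the whole run."""
    failures = []
    for account in accounts:
        try:
            update(account)
        except Exception:
            if _testing:
                raise  # surface the error immediately under test
            failures.append(account)
    return failures
```

In production code the call is process_accounts(accounts, update); a test calls it with _testing=True and sees the original exception.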
Conditioning module-level code is trickier. To get special behavior when testing module mod in package pack, I put
_testing = False  # in pack/__init__.py
from pack import _testing  # in pack/mod.py
Then in test_mod I put something like:
import pack
pack._testing = True
from pack import mod

Check that a function raises a warning with nose tests

I'm writing unit tests using nose, and I'd like to check whether a function raises a warning (the function uses warnings.warn). Is this something that can easily be done?
def your_code():
    # ...
    warnings.warn("deprecated", DeprecationWarning)
    # ...

def your_test():
    with warnings.catch_warnings(record=True) as w:
        warnings.simplefilter("always")  # make sure the warning is not filtered out
        your_code()
    assert len(w) == 1
Instead of just checking the length, you can inspect the caught warning in depth, of course:
assert issubclass(w[0].category, DeprecationWarning)
In Python 2.7 or later, you can check the message text as:
assert str(w[0].message) == "deprecated"
There are (at least) two ways of doing this. You can catch the warning in the list of warnings.WarningMessage objects in the test, or use mock to patch the imported warnings in your module.
I think the patch version is more general.
raise_warning.py:
import warnings

def should_warn():
    warnings.warn('message', RuntimeWarning)
    print('didn\'t I warn you?')
raise_warning_tests.py:
import unittest
from unittest.mock import patch  # on Python 2: from mock import patch

import raise_warning

class TestWarnings(unittest.TestCase):
    @patch('raise_warning.warnings.warn')
    def test_patched(self, mock_warnings):
        """test with patched warnings"""
        raise_warning.should_warn()
        self.assertTrue(mock_warnings.called)

    def test_that_catches_warning(self):
        """test by catching warning"""
        with raise_warning.warnings.catch_warnings(record=True) as wrn:
            raise_warning.should_warn()
            # per PEP 8, check for non-empty sequences by their truthiness
            self.assertTrue(wrn)
