In Python 3.10, I have a function like:
from shutil import which

def my_func():
    if which('myexecutable.sh'):
        ...  # do stuff
    else:
        ...  # do other stuff
I would like to write a unit test with pytest that exercises the first branch even though the executable is not present. What is the best way to do this?
I know that I can use monkeypatch.setenv() to set an environment variable, but that won't make the which() check pass. There's also the added challenge of making sure this works on both Windows and Linux.
You could try something like this, using the mocker fixture from the pytest-mock plugin:
# in script file
from shutil import which

def myfunc():
    if which("myexecutable.sh"):
        return "OK"
    else:
        ...

# in test file
import pytest

from script import myfunc

@pytest.fixture
def which(mocker):
    return mocker.patch("script.which", autospec=True)

def test_myfunc(which):
    assert myfunc() == "OK"
Running pytest outputs: 1 passed
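If you prefer not to depend on pytest-mock, a similar effect can be had with pytest's built-in monkeypatch fixture. A minimal sketch, assuming the module under test is named script.py; the fake path returned by the lambda is only illustrative:

import script

def test_myfunc_without_plugin(monkeypatch):
    # Replace the `which` name that script.py looks up, so the branch
    # behaves as if the executable were found on PATH. No real lookup
    # happens, so it works the same on Windows and Linux.
    monkeypatch.setattr(script, "which", lambda cmd: "/usr/bin/myexecutable.sh")
    assert script.myfunc() == "OK"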
I use pathlib.Path.cwd() as a default argument in a function signature.
def foobar(dir_arg=pathlib.Path.cwd() / 'logs'):
    # ...
When I unit-test this function with pyfakefs, the argument isn't patched, even though patch_default_args is set to True.
Here is the MWE.
#!/usr/bin/env python3
import pyfakefs.fake_filesystem_unittest as pyfakefs_ut
import pathlib


class Logging(pyfakefs_ut.TestCase):
    def setUp(self):
        print('PyFakeFS activated')
        self.setUpPyfakefs(
            allow_root_user=False,
            patch_default_args=True)

    def test_foobar(self):
        foobar()


def foobar(dir_arg=pathlib.Path.cwd() / 'logs'):
    dir_local = pathlib.Path.cwd() / 'logs'
    print(f'dir_arg: {dir_arg}')
    print(f'dir_local: {dir_local}')


if __name__ == '__main__':
    print('Without PyFakeFS')
    foobar()
Run this as a test (with pyfakefs activated):
python3 -m unittest x.py
PyFakeFS activated
dir_arg: /home/user/tab-cloud/_transfer/logs
dir_local: /logs
.
----------------------------------------------------------------------
Ran 1 test in 0.744s
OK
Run it as usual, without pyfakefs:
./x.py
Without PyFakeFS
dir_arg: /home/user/tab-cloud/_transfer/logs
dir_local: /home/user/tab-cloud/_transfer/logs
The expected output when run as a test would be
PyFakeFS activated
dir_arg: /logs
dir_local: /logs
There is also an open issue about this problem. But now I think this isn't a bug but rather a problem in front of the monitor.
My answer is based on the feedback of PyFakeFS's maintainer.
The question hits an edge case that is not covered by the patch_default_args argument: it patches filesystem functions but not classes (as in my case).
A solution is to use the modules_to_reload argument.
To demonstrate the solution in code, I separated the MWE from the question into two files.
Here is x.py:
#!/usr/bin/env python3
import pathlib


def foobar(dir_arg=pathlib.Path.cwd() / 'logs'):
    dir_local = pathlib.Path.cwd() / 'logs'
    print(f'dir_arg: {dir_arg}')
    print(f'dir_local: {dir_local}')


if __name__ == '__main__':
    print('Without PyFakeFS')
    foobar()
And here is test_x.py:
#!/usr/bin/env python3
import pyfakefs.fake_filesystem_unittest as pyfakefs_ut
import pathlib

import x


class Logging(pyfakefs_ut.TestCase):
    def setUp(self):
        print('PyFakeFS activated')
        self.setUpPyfakefs(
            allow_root_user=False,
            modules_to_reload=[x])

    def test_foobar(self):
        x.foobar()
Reload order
When you use modules_to_reload, the load order also matters in some edge cases. An example and a solution can be found here.
Regarding the MWE here: if foobar() were located in a sub-module (sub_x.py) that is imported implicitly in x/__init__.py, then the sub-module should be reloaded first.
modules_to_reload=[x.sub_x, x]
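A hedged sketch of what that could look like in setUp, assuming a hypothetical package layout with x/__init__.py and x/sub_x.py:

import pyfakefs.fake_filesystem_unittest as pyfakefs_ut

import x
import x.sub_x


class Logging(pyfakefs_ut.TestCase):
    def setUp(self):
        # Reload the sub-module first, then the package that imports it.
        self.setUpPyfakefs(
            allow_root_user=False,
            modules_to_reload=[x.sub_x, x])

    def test_foobar(self):
        x.sub_x.foobar()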
I have the following project structure:
root/
|-mylib/
|  |-tests/
|  |  |-__init__.py
|  |  |-test_trend.py
|  |-__init__.py
|  |-data.py
|  |-trend.py
In trend.py:
from mylib.data import get_item

def get_trend(input: str):
    item = get_item(input)
    return f"Trend is: {item}"
Inside data.py:
def get_item(id):
    print("ORIGINAL")
    return get_item_from_db(id)
Testing
I want to test get_trend in isolation, so I patch get_item inside test_trend.py:
import unittest
from unittest.mock import patch

from mylib.trend import get_trend

def p__get_item(input):
    return 0

class MyTestCase(unittest.TestCase):
    @patch("mylib.data.get_item", new=p__get_item)
    def test_trend(self):
        v = get_trend("some_id")
        self.assertEqual(v, "Trend is: 0")
But when I run the tests (the command is run from inside the root directory):
python -m unittest discover
I see in the log that the original get_item is called, and the test fails of course.
What am I doing wrong?
Attempt 1
If I try this other flavor of the patch API:
class MyTestCase(unittest.TestCase):
    @patch("mylib.data.get_item")
    def test_trend(self, mocked_fun):
        mocked_fun.return_value = 0
        v = get_trend("some_id")
        self.assertEqual(v, "Trend is: 0")
It still does not work. In the console log I can see ORIGINAL being printed and the test fails.
Experiment 1
If I change the target in @patch to a non-existing attribute, like:
@patch("mylib.data.non_existing", new=p__get_item)
I actually get an error from the library saying the module does not contain such an attribute. So it seems like mylib.data.get_item is being targeted correctly, but the patching still does not happen.
Explanation
get_item is called in the mylib.trend module; however, you patched it in the mylib.data module, which is the wrong place. patch works by replacing the name where it is looked up, and mylib.trend imported get_item into its own namespace. In this case, you want to patch get_item inside mylib.trend, not mylib.data.
Solution
class MyTestCase(unittest.TestCase):
    @patch("mylib.trend.get_item")
    def test_trend(self, mocked_get_item):
        mocked_get_item.return_value = 0
        ...
Notes
Further explanation on where to patch can be found in the official Python documentation here.
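For completeness, a minimal sketch of the full test with the corrected target; the argument "some_id" is only illustrative, since the question does not show what get_trend is called with:

import unittest
from unittest.mock import patch

from mylib.trend import get_trend


class MyTestCase(unittest.TestCase):
    # Patch the name where it is looked up: inside mylib.trend.
    @patch("mylib.trend.get_item")
    def test_trend(self, mocked_get_item):
        mocked_get_item.return_value = 0
        v = get_trend("some_id")
        self.assertEqual(v, "Trend is: 0")


if __name__ == "__main__":
    unittest.main()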
I have one function that does some processing, and if it fails I exit using sys.exit(myfun()). But when I test with pytest I don't want the myfun() inside sys.exit() to execute. Is there any way with pytest to skip myfun()?
mypyfile.py
import sys

def process():
    # do some logic
    if failed:
        sys.exit(myfun())  # I don't want this to execute when testing via pytest

def myfun():
    print("failed")
test_mypyfile.py
import pytest

import mypyfile

def test_process():
    mypyfile.process()
You could mock myfun in your test; that way the function is not actually called during the test. I'll change your example a bit so that mocking makes more sense (and so you'll be able to see the difference):
# mypyfile.py
import sys
from time import sleep

def process():
    # do some logic
    if failed:
        sys.exit(myfun())

def myfun():
    sleep(4000)

# test_mypyfile.py
from unittest import mock

import mypyfile

def test_process():
    with mock.patch('mypyfile.myfun'):
        mypyfile.process()
Here the mocked myfun is called instead of the real one, so you won't wait the 4000 seconds.
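An alternative sketch using pytest's built-in monkeypatch fixture instead of mock, assuming process() actually reaches the sys.exit branch during the test:

import pytest

import mypyfile


def test_process(monkeypatch):
    # Replace myfun so its real body (the long sleep) never runs.
    monkeypatch.setattr(mypyfile, "myfun", lambda: None)
    # sys.exit() still raises SystemExit, so catch it explicitly.
    with pytest.raises(SystemExit):
        mypyfile.process()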
How can I get test_greet to run in the code below? Note: test_one (when uncommented) is seen and run by the test runner. To be specific, I want the line unittest.main() to correctly pick up the module-level test (test_greet).
import unittest

# class MyTests(unittest.TestCase):
#     def test_one(self):
#         assert 1 == 2

def test_greet():
    assert 1 == 3

if __name__ == "__main__":
    unittest.main()
Let's say you have a file called MyTests.py as below:
import unittest

class MyTests(unittest.TestCase):
    def test_greet(self):
        self.assertEqual(1, 3)
Then:
Open a terminal in the folder where MyTests.py exists.
Run python -m unittest MyTests.
Please note that all your test methods must be prefixed with test_, otherwise they will not be run.
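Applied to the snippet from the question, that means moving test_greet into a TestCase subclass so unittest.main() can pick it up; a minimal sketch:

import unittest


class MyTests(unittest.TestCase):
    def test_greet(self):
        # unittest.main() only collects test_* methods on TestCase
        # subclasses, not module-level functions.
        self.assertEqual(1, 3)


if __name__ == "__main__":
    unittest.main()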
Is there a way to save the value of a parameter provided by a pytest fixture?
Here is an example of conftest.py
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption("--parameter", action="store", default="default",
help="configuration file path")
#pytest.fixture
def param(request):
parameter = request.config.getoption("--parameter")
return parameter
Here is an example of a pytest module:
# content of my_test.py
def test_parameters(param):
    assert param == "yes"
OK, everything works fine, but is there a way to get the value of param outside the test, for example with some built-in pytest function like pytest.get_fixture_value["parameter"]?
EDITED - DETAILED EXPLANATION OF WHAT I WANT TO ACHIEVE
I am writing a module that deploys and then provides parameters to tests written in pytest. My idea is: if someone's test looks like this:
class TestApproachI:
    @load_params_as_kwargs(parameters_A)
    def setup_class(cls, param_1, param_2, ..., param_n):
        # code of setup_class

    def teardown_class(cls):
        # some code

    def test_01(self):
        # test code
And this someone gives me a configuration file that explains which parameters to run his code with, then I will analyze those parameters (in some other script) and run his tests with the command pytest --parameters=path_to_serialized_python_tuple test_to_run, where this tuple contains the provided values for his parameters in the right order. I will tell that guy (with the tests) to add this decorator to all the tests he wants me to provide parameters for. The decorated tests would look like this:
class TestApproachI:
    # this path_to_serialized_tuple should be provided by
    # 'pytest --parameters=path_to_serialized_python_tuple test_to_run'
    @load_params(path_to_serialized_tuple)
    def setup_class(cls, param_1, param_2, ..., param_n):
        # code of setup_class

    def teardown_class(cls):
        # some code

    def test_01(self):
        # test code
The decorator function would look like this:
from functools import wraps

def load_params(parameters):
    def decorator(func_to_decorate):
        @wraps(func_to_decorate)
        def wrapper(self):
            # deserialize the tuple and replace the values of the test parameters
            return func_to_decorate(self, *parameters)
        return wrapper
    return decorator
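For illustration, a minimal sketch of how such a decorator behaves when applied with an in-memory tuple; the class, test, and values are made up, and no serialization is involved:

from functools import wraps


def load_params(parameters):
    def decorator(func_to_decorate):
        @wraps(func_to_decorate)
        def wrapper(self):
            # Inject the prepared values as positional arguments.
            return func_to_decorate(self, *parameters)
        return wrapper
    return decorator


class TestApproach:
    @load_params(("value_1", "value_2"))
    def test_01(self, param_1, param_2):
        assert (param_1, param_2) == ("value_1", "value_2")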
Set that parameter as an OS environment variable, and then use it anywhere in your test through os.getenv('parameter').
So you can use it like this:
import os

import pytest

@pytest.fixture
def param(request):
    parameter = request.config.getoption("--parameter")
    os.environ["parameter"] = parameter
    return parameter

@pytest.mark.usefixtures('param')
def test_parameters(param):
    assert os.getenv('parameter') == "yes"
I am using pytest-lazy-fixture to get the value of any fixture:
First install it using pip install pytest-lazy-fixture or pipenv install pytest-lazy-fixture.
Then simply assign the fixture to a variable like this, if you want:
fixture_value = pytest.lazy_fixture('fixture')
The fixture name has to be wrapped in quotes.
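lazy_fixture is typically used inside pytest.mark.parametrize, where the fixture value is resolved at test setup time. A minimal sketch, with a made-up fixture name my_fixture:

import pytest


@pytest.fixture
def my_fixture():
    return "yes"


@pytest.mark.parametrize("value", [pytest.lazy_fixture("my_fixture")])
def test_value(value):
    # The lazy fixture is resolved to "yes" when the test runs.
    assert value == "yes"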
You can use pytest's config.cache, like this:
def function_1(request):
    request.config.cache.set("user_data", "name")
    ...

def function_2(request):
    request.config.cache.get("user_data", None)
    ...
Here is more info about it
https://docs.pytest.org/en/latest/reference/reference.html#std-fixture-cache
https://docs.pytest.org/en/6.2.x/cache.html
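Tied back to the --parameter option from the question's conftest.py, a hedged sketch of a fixture writing to the cache and a test reading it back; the cache key example/param is made up:

import pytest


@pytest.fixture
def param(request):
    # Store the option value in pytest's cache (it persists across runs).
    value = request.config.getoption("--parameter")
    request.config.cache.set("example/param", value)
    return value


def test_parameters(param, request):
    # Read the cached value back anywhere that can reach the pytest config.
    assert request.config.cache.get("example/param", None) == param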