I want to test appendRole, which calls getFileAsJson to read a file with open.
My problem is that I don't know which open call will happen next, because there are many if/elif branches.
def appendRole(self, hosts=None, _newrole=None, newSubroles=None, undoRole=False, config_path=None):
    """ Same as changeRole but keeps subroles """
    if hosts is None:
        hosts = ["127.0.0.1"]
    if newSubroles is None:
        newSubroles = {}
    if config_path is None:
        config_path = self.config_path
    with self._lock:
        default = {}
        data = self.getFileAsJson(config_path, default)
        ...................
        ...................
        ...................
        ...................
        data1 = self.getFileAsJson(self.config_path_all, {"some"})
        data2 = self.getFileAsJson(self.config_path_core, {"something"})
        ...................
        ...................
        ...................
def getFileAsJson(self, config_path, init_value):
    """
    Read the file and return its JSON data.
    If the file doesn't exist, it will be created.
    """
    self.createFile(config_path, init_value)
    try:
        with open(config_path, "r") as json_data:
            data = json.load(json_data)
            return data
    except Exception as e:
        self.logAndRaiseValueError(
            "Can't read data from %s because %s" % (config_path, e))
Even though you can find an answer to your question at Python mock builtin 'open' in a class using two different files, I would encourage you to change your approach: write tests for getFileAsJson() and then trust it.
To test appendRole(), use mock.patch to patch getFileAsJson(); then, via the side_effect attribute, you can instruct the mock to return exactly what you need for your test.
So, after writing some tests for getFileAsJson() where you use mock_open() to mock the open builtin (you may need to patch createFile() too), your appendRole() test looks something like this:
@mock.patch('mymodule.getFileAsJson', autospec=True)
def test_appendRole(self, mock_getFileAsJson):
    # m_data, m_data1, m_data2, ... is what getFileAsJson
    # is supposed to return in your test
    mock_getFileAsJson.side_effect = [m_data, m_data1, m_data2, ...]
    # Invoke appendRole() to test it
    appendRole(bla, bla)
    # Now you can use the mock_getFileAsJson.assert* family of methods to
    # check how your appendRole() called it.
    # Moreover, add whatever else you need to test in appendRole()
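For the getFileAsJson() tests themselves, a minimal sketch could look like the following; it assumes the class lives in mymodule (the class and attribute names here are illustrative, not from the question), and patches createFile() so only the read path is exercised:

import json
from unittest import mock

@mock.patch('mymodule.MyClass.createFile', autospec=True)
def test_getFileAsJson(self, mock_createFile):
    # Serve fake JSON content instead of touching the file system
    m = mock.mock_open(read_data=json.dumps({"spam": 1}))
    with mock.patch('mymodule.open', m, create=True):
        data = self.obj.getFileAsJson('fake.json', {})
    # The right path was opened for reading and the JSON was parsed
    m.assert_called_once_with('fake.json', 'r')
    self.assertEqual(data, {"spam": 1})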
Related
I am testing a function that does two requests.post calls. So I stub the call to this function. However, I want the first stub to return one piece of fake data, and the second stub another. How can I do this?
Currently, I have:
@mock.patch('requests.post', side_effect=mocked_object)
def test_function_ok(self, mock_post):
    ...
And I want something like this:
@mock.patch_once('requests.post', side_effect=mocked_1)
@mock.patch_once('requests.post', side_effect=mocked_2)
def test_function_ok(self, mock_post):
    ...
Thanks to MrBean Bremen. However, I had to call the elements in the list, like this:
@mock.patch('requests.post', side_effect=[mocked_1(), mocked_2()])
def test_function_ok(self, mock_post):
    ...
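For reference, passing a list as side_effect makes the mock return the next element on each successive call. A minimal self-contained sketch (the fake response values are purely illustrative):

from unittest import mock

import requests

@mock.patch('requests.post', side_effect=[{'status': 'first'}, {'status': 'second'}])
def test_two_posts(mock_post):
    # Each call consumes the next element of the side_effect list
    assert requests.post('http://example.com') == {'status': 'first'}
    assert requests.post('http://example.com') == {'status': 'second'}
    assert mock_post.call_count == 2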
You can send different data to be mocked for the two requests: reassigning return_value between the calls makes the mock return the expected data specified at that point.
@patch('the_function_path_you_want_to_mock')
def test_data(self, mock_obj):
    mock_obj.return_value.function_to_be_mocked.return_value = [data_to_mocked]
    response = self.client.post('/request', data=json.dumps(self.input_data), content_type='application/vnd.api+json')
    mock_obj.return_value.function_to_be_mocked.return_value = [data_to_mocked]
    response = self.client.post('/request', data=json.dumps(self.input_data), content_type='application/vnd.api+json')
The Python API is available to read objects from a cluster. By cloning we mean:
1. Get a copy of an existing Kubernetes object using kubectl get
2. Change the properties of the object
3. Apply the new object
Until recently, this was possible with the --export option of kubectl get, but --export was deprecated in 1.14. How can we use the Python Kubernetes API to do steps 1-3 described above?
There are multiple questions about how to extract code from the Python API to YAML, but it's unclear how to transform the Kubernetes API object itself.
Just use to_dict(), which is now offered by Kubernetes Client objects. Note that it creates only a partly deep copy. So to be safe:
copied_obj = copy.deepcopy(obj.to_dict())
Dicts can be passed to create* and patch* methods.
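A minimal sketch of the whole read/modify/create round trip under that approach (the deployment and namespace names are illustrative, and this relies, as stated above, on the create* methods accepting dicts):

import copy

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# 1. Read the existing object
dep = apps.read_namespaced_deployment("my-deployment", "default")

# 2. Copy it as a dict and change its properties
copied_obj = copy.deepcopy(dep.to_dict())
copied_obj["metadata"] = {"name": "my-deployment-clone", "namespace": "default"}
copied_obj["status"] = None

# 3. Apply the new object
apps.create_namespaced_deployment("default", copied_obj)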
For convenience, you can also wrap the dict in Prodict.
copied_obj = Prodict.from_dict(copy.deepcopy(obj.to_dict()))
The final issue is getting rid of superfluous fields. (Unfortunately, Kubernetes sprinkles them throughout the object.) I use kopf's internal facility for getting the "essence" of an object. (It takes care of the deep copy.)
copied_obj = kopf.AnnotationsDiffBaseStorage().build(body=kopf.Body(obj.to_dict()))
copied_obj = Prodict.from_dict(copied_obj)
After looking at the requirement, I spent a couple of hours researching the Kubernetes Python API. Issue 340 and others ask how to transform a Kubernetes API object into a dict, and the only workaround I found was to retrieve the raw data and then convert it to JSON.
The following code uses the Kubernetes API to get a deployment and its related HPA from namespaced objects, retrieving their raw values as JSON.
Then, after transforming the data into a dict, you can optionally clean it up by removing null references.
Once you are done, you can transform the dict into a YAML payload and save it to the file system.
Finally, you can apply it either with kubectl or with the Kubernetes Python API.
Note:
Make sure to set KUBECONFIG=config so that you can point to a cluster
Make sure to adjust the values of origin_obj_name = "istio-ingressgateway" and origin_obj_namespace = "istio-system" to the names of the corresponding objects to be cloned in the given namespace.
import os
import logging
import json

import yaml
import crayons

from kubernetes import client, config
from kubernetes.client.rest import ApiException

logging.basicConfig(level=logging.INFO)
LOGGER = logging.getLogger(" IngressGatewayCreator ")


class IngressGatewayCreator:

    @staticmethod
    def clone_default_ingress(clone_context):
        # Clone the deployment
        IngressGatewayCreator.clone_deployment_object(clone_context)

        # Clone the deployment's HPA
        IngressGatewayCreator.clone_hpa_object(clone_context)

    @staticmethod
    def clone_deployment_object(clone_context):
        kubeconfig = os.getenv('KUBECONFIG')
        config.load_kube_config(kubeconfig)
        v1apps = client.AppsV1beta1Api()

        deployment_name = clone_context.origin_obj_name
        namespace = clone_context.origin_obj_namespace

        try:
            # Get an instance from the API without deserialization to a model
            # https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
            deployment = v1apps.read_namespaced_deployment(deployment_name, namespace, _preload_content=False)
        except ApiException as error:
            if error.status == 404:
                LOGGER.info("Deployment %s not found in namespace %s", deployment_name, namespace)
                return
            raise

        # Clone the deployment object as a dict
        cloned_dict = IngressGatewayCreator.clone_k8s_object(deployment, clone_context)

        # Change additional fields
        cloned_dict["spec"]["selector"]["matchLabels"]["istio"] = clone_context.name
        cloned_dict["spec"]["template"]["metadata"]["labels"]["istio"] = clone_context.name

        # Save the deployment template in the output dir
        clone_context.save_clone_as_yaml(cloned_dict, "deployment")

    @staticmethod
    def clone_hpa_object(clone_context):
        kubeconfig = os.getenv('KUBECONFIG')
        config.load_kube_config(kubeconfig)
        hpas = client.AutoscalingV1Api()

        hpa_name = clone_context.origin_obj_name
        namespace = clone_context.origin_obj_namespace

        try:
            # Get an instance from the API without deserialization to a model
            # https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
            hpa = hpas.read_namespaced_horizontal_pod_autoscaler(hpa_name, namespace, _preload_content=False)
        except ApiException as error:
            if error.status == 404:
                LOGGER.info("HPA %s not found in namespace %s", hpa_name, namespace)
                return
            raise

        # Clone the HPA object as a dict
        cloned_dict = IngressGatewayCreator.clone_k8s_object(hpa, clone_context)

        # Change additional fields
        cloned_dict["spec"]["scaleTargetRef"]["name"] = clone_context.name

        # Save the HPA template in the output dir
        clone_context.save_clone_as_yaml(cloned_dict, "hpa")

    @staticmethod
    def clone_k8s_object(k8s_object, clone_context):
        # Manipulate at the dict level, not via the k8s API, using the fetched raw object
        # https://github.com/kubernetes-client/python/issues/574#issuecomment-405400414
        cloned_obj = json.loads(k8s_object.data)

        labels = cloned_obj['metadata']['labels']
        labels['istio'] = clone_context.name

        cloned_obj['status'] = None

        # Scrub by removing the "null" and "None" values
        cloned_obj = IngressGatewayCreator.scrub_dict(cloned_obj)

        # Patch the metadata with the adjusted name and labels
        cloned_obj['metadata'] = {
            "name": clone_context.name,
            "namespace": clone_context.origin_obj_namespace,
            "labels": labels
        }

        return cloned_obj

    # https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570
    @staticmethod
    def scrub_dict(d):
        new_dict = {}
        for k, v in d.items():
            if isinstance(v, dict):
                v = IngressGatewayCreator.scrub_dict(v)
            if isinstance(v, list):
                v = IngressGatewayCreator.scrub_list(v)
            if v not in (u'', None, {}):
                new_dict[k] = v
        return new_dict

    # https://stackoverflow.com/questions/12118695/efficient-way-to-remove-keys-with-empty-strings-from-a-dict/59959570#59959570
    @staticmethod
    def scrub_list(d):
        scrubbed_list = []
        for i in d:
            if isinstance(i, dict):
                i = IngressGatewayCreator.scrub_dict(i)
            scrubbed_list.append(i)
        return scrubbed_list


class IngressGatewayContext:
    def __init__(self, manifest_dir, name, hostname, nats, type):
        self.manifest_dir = manifest_dir
        self.name = name
        self.hostname = hostname
        self.nats = nats
        self.ingress_type = type
        self.origin_obj_name = "istio-ingressgateway"
        self.origin_obj_namespace = "istio-system"

    def save_clone_as_yaml(self, k8s_object, kind):
        try:
            # Just try to create it in case it doesn't exist
            os.makedirs(self.manifest_dir)
        except FileExistsError:
            LOGGER.debug("Dir already exists %s", self.manifest_dir)

        full_file_path = os.path.join(self.manifest_dir, self.name + '-' + kind + '.yaml')

        # Store in the file system with the name provided
        # https://stackoverflow.com/questions/12470665/how-can-i-write-data-in-yaml-format-in-a-file/18210750#18210750
        with open(full_file_path, 'w') as yaml_file:
            yaml.dump(k8s_object, yaml_file, default_flow_style=False)

        LOGGER.info(crayons.yellow("Saved %s '%s' at %s: \n%s"), kind, self.name, full_file_path, k8s_object)


try:
    k8s_clone_name = "http2-ingressgateway"
    hostname = "my-nlb-awesome.a.company.com"
    nats = ["123.345.678.11", "333.444.222.111", "33.221.444.23"]
    manifest_dir = "out/clones"
    context = IngressGatewayContext(manifest_dir, k8s_clone_name, hostname, nats, "nlb")

    IngressGatewayCreator.clone_default_ingress(context)
except Exception as err:
    print("ERROR: {}".format(err))
Not Python, but I've used jq in the past to quickly clone something with the small customisations required for each use case (usually cloning secrets into a new namespace).
kc get pod whatever-85pmk -o json \
| jq 'del(.status, .metadata ) | .metadata.name="newname"' \
| kc apply -f - -o yaml --dry-run
This is really easy to do with Hikaru.
Here is an example from my own open source project:
from typing import List

from hikaru import HikaruBase


def duplicate_without_fields(obj: HikaruBase, omitted_fields: List[str]):
    """
    Duplicate a Hikaru object, omitting the specified fields.

    This is useful when you want to compare two versions of an object and first
    "clean up" fields that shouldn't be compared.

    :param HikaruBase obj: A Kubernetes object
    :param List[str] omitted_fields: List of fields to be omitted. Field name format should be '.' separated
        For example: ["status", "metadata.generation"]
    """
    if obj is None:
        return None

    duplication = obj.dup()
    for field_name in omitted_fields:
        field_parts = field_name.split(".")
        try:
            if len(field_parts) > 1:
                parent_obj = duplication.object_at_path(field_parts[:-1])
            else:
                parent_obj = duplication

            setattr(parent_obj, field_parts[-1], None)
        except Exception:
            pass  # in case the field doesn't exist on this object

    return duplication
Dumping the object to YAML afterwards, or re-applying it to the cluster, is trivial with Hikaru.
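For example, a minimal sketch of the dump step (get_yaml is Hikaru's YAML renderer; the deployment variable here is an assumption standing in for a Hikaru object you loaded or built elsewhere):

from hikaru import get_yaml

# 'deployment' is assumed to be a previously loaded Hikaru object
cleaned = duplicate_without_fields(deployment, ["status", "metadata.generation"])
print(get_yaml(cleaned))  # ready to save to a file or feed to kubectl apply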
We're using this to clean up objects so that we can show users a github-style diff when objects change, without spammy fields like generation that change often.
Brand new to this library.
Here is the call stack of my mocked object:
[call(),
call('test'),
call().instance('test'),
call().instance().database('test'),
call().instance().database().snapshot(),
call().instance().database().snapshot().__enter__(),
call().instance().database().snapshot().__enter__().execute_sql('SELECT * FROM users'),
call().instance().database().snapshot().__exit__(None, None, None),
call().instance().database().snapshot().__enter__().execute_sql().__iter__()]
Here is the code I have used:
@mock.patch('testmodule.Client')
def test_read_with_query(self, mock_client):
    mock = mock_client()
    pipeline = TestPipeline()
    records = pipeline | ReadFromSpanner(TEST_PROJECT_ID, TEST_INSTANCE_ID, self.database_id).with_query('SELECT * FROM users')
    pipeline.run()
    print(mock_client.mock_calls)
    exit()
I want to mock this whole stack so that it eventually gives me some fake data, which I will provide as a return value.
The code being tested is:
spanner_client = Client(self.project_id)
instance = spanner_client.instance(self.instance_id)
database = instance.database(self.database_id)
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(self.query)
So my requirement is that the results variable should contain the data I provide.
How can I provide a return value to such nested calls?
Thanks
Create separate MagicMock instances for the instance, database and snapshot objects in the code under test. Use return_value to configure the return values of each method. Here is an example. I simplified the method under test to just be a free-standing function called mut.
# test_module.py : the module under test
class Client:
    pass

def mut(project_id, instance_id, database_id, query):
    spanner_client = Client(project_id)
    instance = spanner_client.instance(instance_id)
    database = instance.database(database_id)
    with database.snapshot() as snapshot:
        results = snapshot.execute_sql(query)
    return results

# test code (pytest)
from unittest.mock import MagicMock
from unittest import mock

from test_module import mut

@mock.patch('test_module.Client')
def test_read_with_query(mock_client_class):
    mock_client = MagicMock()
    mock_instance = MagicMock()
    mock_database = MagicMock()
    mock_snapshot = MagicMock()
    expected = 'fake query results'

    mock_client_class.return_value = mock_client
    mock_client.instance.return_value = mock_instance
    mock_instance.database.return_value = mock_database
    mock_database.snapshot.return_value = mock_snapshot
    mock_snapshot.execute_sql.return_value = expected
    mock_snapshot.__enter__.return_value = mock_snapshot

    observed = mut(29, 42, 77, 'select *')

    mock_client_class.assert_called_once_with(29)
    mock_client.instance.assert_called_once_with(42)
    mock_instance.database.assert_called_once_with(77)
    mock_database.snapshot.assert_called_once_with()
    mock_snapshot.__enter__.assert_called_once_with()
    mock_snapshot.execute_sql.assert_called_once_with('select *')
    assert observed == expected
This test is kind of portly. Consider breaking it apart by using a fixture and a before-function that sets up the mocks.
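A minimal sketch of that refactor, reusing the same test_module as above (the fixture name and factoring are illustrative):

import pytest
from unittest import mock
from unittest.mock import MagicMock

from test_module import mut

@pytest.fixture
def spanner_mocks():
    # Patch Client and wire up the instance -> database -> snapshot chain once
    with mock.patch('test_module.Client') as mock_client_class:
        mock_snapshot = MagicMock()
        mock_snapshot.__enter__.return_value = mock_snapshot
        chain = mock_client_class.return_value.instance.return_value.database.return_value
        chain.snapshot.return_value = mock_snapshot
        yield mock_client_class, mock_snapshot

def test_read_with_query(spanner_mocks):
    mock_client_class, mock_snapshot = spanner_mocks
    mock_snapshot.execute_sql.return_value = 'fake query results'

    assert mut(29, 42, 77, 'select *') == 'fake query results'
    mock_snapshot.execute_sql.assert_called_once_with('select *')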
Either set the value directly on your Mock instance, routing through __enter__ since snapshot() is used as a context manager:
mock.return_value.instance.return_value.database.return_value.snapshot.return_value.__enter__.return_value.execute_sql.return_value = MY_MOCKED_DATA
or patch the target method and set return_value on it, something like:
@mock.patch('database_engine.execute_sql', return_value=MY_MOCKED_DATA)
I have several test cases and test functions, and the list of test cases is different for the different functions. This can be easily done with pytest.mark.parametrize. The extra need I have is to load a resource (a file in my case) and I'd like to have this file only loaded once per test session and cached.
Below is an example illustrating what I want. It works, but I would like to find a way to use pytest fixtures or some other caching mechanism so that I don't have to do the caching myself and have the pars = load_file(pars) line in each test function.
Can someone please explain how to do this with pytest?
import pytest

case_1 = dict(label='case_1', spam=1)
case_2 = dict(label='case_2', spam=2)
case_3 = dict(label='case_3', spam=3)

_cache = {}

def load_file(pars):
    if pars['label'] in _cache:
        print('load_file from cache', pars)
        return _cache[pars['label']]
    else:
        print('load_file loading', pars)
        pars['file'] = pars['label'] + ' spam!'
        _cache[pars['label']] = pars
        return pars

@pytest.mark.parametrize('pars', [case_1, case_2])
def test_a(pars):
    pars = load_file(pars)
    print('test_a', pars)

@pytest.mark.parametrize('pars', [case_2, case_3])
def test_b(pars):
    pars = load_file(pars)
    print('test_b', pars)

@pytest.mark.parametrize('pars', [case_1, case_2, case_3])
def test_c(pars):
    pars = load_file(pars)
    print('test_c', pars)

### more tests here for various combinations of test cases
The first and most obvious solution is to use session-scoped fixtures. However, it requires restructuring the test file, and loading all of the known files in advance.
import pytest

@pytest.fixture(scope='session')
def pars_all():
    cache = {}
    for case in [case_1, case_2, case_3]:
        cache[case['label']] = 'case {} content'.format(case['label'])
    yield cache
    # optionally destroy or unload or unlock here

@pytest.fixture(scope='function')
def pars(request, pars_all):
    label = request.param
    yield pars_all[label]

@pytest.mark.parametrize('pars', ['case_1', 'case_2'], indirect=True)
def test(pars):
    pass
Please note the indirect parametrisation. It means that the pars fixture will be prepared instead, receiving the parameter value in request.param. The parameter name and the fixture must share the same name.
The session-scoped fixture (or module-scoped, or class-scoped if you wish) will be prepared only once for all the tests. It is important to note that wider-scoped fixtures can be used in narrower-scoped or same-scoped fixtures, but not in the opposite direction.
If the cases are not that well-defined, it is just as easy; the cache is simply populated on demand:
import pytest

@pytest.fixture(scope='session')
def pars_all():
    yield {}

@pytest.fixture(scope='function')
def pars(request, pars_all):
    label = request.param
    if label not in pars_all:
        print('[[[{}]]]'.format(request.param))
        pars_all[label] = 'content of {}'.format(label)
    yield pars_all[label]

@pytest.mark.parametrize('pars', ['case_1', 'case_2'], indirect=True)
def test_1(pars):
    print(pars)

@pytest.mark.parametrize('pars', ['case_1', 'case_3'], indirect=True)
def test_2(pars):
    print(pars)
Note that the {} object is created only once, because it is session-scoped, and is shared among all tests & callspecs. So, if one fixture adds something to it, other fixtures will see it too. You can see this in how case_1 is reused in test_2:
$ pytest -s -v -ra test_me.py
======= test session starts ==========
...
collected 4 items
test_me.py::test_1[case_1] [[[case_1]]]
content of case_1
PASSED
test_me.py::test_1[case_2] [[[case_2]]]
content of case_2
PASSED
test_me.py::test_2[case_1] content of case_1
PASSED
test_me.py::test_2[case_3] [[[case_3]]]
content of case_3
PASSED
======== 4 passed in 0.01 seconds ==========
A simple use of @lru_cache in your file-parsing function can also do the caching trick:
from functools import lru_cache

@lru_cache(maxsize=3)
def load_file(file_name):
    """This function loads the file and returns its contents."""
    print("loading file " + file_name)
    return "<dummy content for " + file_name + ">"
You can also reach the same result while making the whole code a bit more readable by separating the test functions from the test cases with pytest-cases (I'm the author, by the way!):
from functools import lru_cache
from pytest_cases import parametrize_with_cases

@lru_cache(maxsize=3)
def load_file(file_name):
    """This function loads the file and returns its contents."""
    print("loading file " + file_name)
    return "<dummy content for " + file_name + ">"

def case_1():
    return load_file('file1')

def case_2():
    return load_file('file2')

def case_3():
    return load_file('file3')

@parametrize_with_cases("pars", cases=[case_1, case_2])
def test_a(pars):
    print('test_a', pars)

@parametrize_with_cases("pars", cases=[case_2, case_3])
def test_b(pars):
    print('test_b', pars)

@parametrize_with_cases("pars", cases=[case_1, case_2, case_3])
def test_c(pars):
    print('test_c', pars)
Yields:
loading file file1
test_a <dummy content for file1>PASSED
loading file file2
test_a <dummy content for file2>PASSED
test_b <dummy content for file2>PASSED
loading file file3
test_b <dummy content for file3>PASSED
test_c <dummy content for file1>PASSED
test_c <dummy content for file2>PASSED
test_c <dummy content for file3>PASSED
Finally, note that depending on your use case you might wish to switch to a case generator by using @parametrize on a case function, which could be more readable:
from pytest_cases import parametrize

@parametrize("file_name", ["file1", "file2"])
def case_gen(file_name):
    return load_file(file_name)
Also look at tags & filters if you do not want to hardcode the cases explicitly.
I'm studying how to use mocking in my unit test program.
I have a SafeConfigParser object, and I want to test that what I write is correct.
After googling the mocking usage of SafeConfigParser, I already know how to test the read part. But I still don't know how to verify the write part of SafeConfigParser.
My idea is:
1. Make an empty buffer.
2. Find a way to attach the buffer to SafeConfigParser.
3. Call the function which includes SafeConfigParser.write().
4. Verify the buffer against my expected answer (a sketch of this idea follows below).
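A minimal sketch of that buffer idea, using an in-memory file-like object instead of a real file (shown with Python 3's configparser; SafeConfigParser is the Python 2 name, and write() accepts any file-like object):

import io
from configparser import ConfigParser  # SafeConfigParser in Python 2

parser = ConfigParser()
parser.add_section('Token')
parser.set('Token', 'access_token', 'aaa')
parser.set('Token', 'refresh_token', 'bbb')

buf = io.StringIO()  # the "empty buffer"
parser.write(buf)    # write into the buffer instead of a real file

content = buf.getvalue()
assert '[Token]' in content
assert 'access_token = aaa' in content
assert 'refresh_token = bbb' in content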
My program, which needs to be tested, is like the following:
def write_tokens_to_config(self):
    """Write token values to the config
    """
    parser = SafeConfigParser()
    with open(self.CONFIG_PATH) as fp:
        parser.readfp(fp)
    if not parser.has_section('Token'):
        parser.add_section('Token')
    parser.set('Token', 'access_token', self._access_token)
    parser.set('Token', 'refresh_token', self._refresh_token)
    with open(self.CONFIG_PATH, 'wb') as fp:
        parser.write(fp)
P.S. You can check the read part from this url: http://www.snip2code.com/Snippet/4347/
I finally found a solution :).
I modified my program (e.g. program.py) to the following:
class Program():
    def __init__(self):
        self._access_token = None
        self._refresh_token = None
        self.CONFIG_PATH = 'test.conf'

    def write_tokens_to_config(self):
        """Write token values to the config
        """
        parser = SafeConfigParser()
        parser.read(self.CONFIG_PATH)
        if not parser.has_section('Token'):
            parser.add_section('Token')
        parser.set('Token', 'access_token', self._access_token)
        parser.set('Token', 'refresh_token', self._refresh_token)
        with open(self.CONFIG_PATH, 'wb') as f:
            parser.write(f)
And my test program looks like this:
class TestMyProgram(unittest.TestCase):
    def setUp(self):
        from program import Program
        self.program = Program()

    def test_write_tokens_to_config(self):
        from mock import mock_open
        from mock import call
        self.program._access_token = 'aaa'
        self.program._refresh_token = 'bbb'
        with mock.patch('program.ConfigParser.SafeConfigParser.read'):
            m = mock_open()
            with mock.patch('__builtin__.open', m, create=True):
                self.program.write_tokens_to_config()
            m.assert_called_once_with(self.program.CONFIG_PATH, 'wb')
            handle = m()
            handle.write.assert_has_calls(
                [
                    call('[Token]\n'),
                    call('access_token = aaa\n'),
                    call('refresh_token = bbb\n'),
                ]
            )
Ref: http://docs.python.org/dev/library/unittest.mock#mock-open