How to parametrize Hypothesis strategy in @given - python

I'm testing a REST API which can process different requests, e.g., dated and dateless. A request has a field called request_type. I'm wondering what's the best way to write the tests with Hypothesis:
I can write two tests, one for dated and the other for dateless.
But for a common property, can I write one test combined with pytest.mark.parametrize? The problem is how the request strategy can use that parameter req_type in @given.
@pytest.mark.parametrize('req_type', ['dated', 'dateless'])
@given(req=requests())
def test_process_request(req, req_type):
    # override the req_type field in req with the input req_type
    pass
Is there a way to parametrize like @given(req=requests(req_type))? Or shall I just generate dated and dateless requests randomly and pass them into the test?

You can't do it all in external decorators, since they're (lazily) evaluated at import time, but you can use the data() strategy to draw a value inside your test function.
@pytest.mark.parametrize('req_type', ['dated', 'dateless'])
@given(data=st.data())
def test_process_request(req_type, data):
    req = data.draw(requests(req_type))
    ...
Alternatively, if you can get the req_type from the req object, it is probably more elegant to generate either type and pass them into the test:
@given(req=st.one_of(requests('dateless'), requests('dated')))
def test_process_request(req):
    req_type = req.type
    ...
I'd prefer the latter as it's a bit less magical, but the former does work if you need it.
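For reference, here is a minimal sketch of what a req_type-aware requests() strategy could look like, assuming a simple request object with the type attribute used above (the Request shape and field names are illustrative, not from the original code):

from collections import namedtuple
import hypothesis.strategies as st

# hypothetical request object; the real code presumably has its own class
Request = namedtuple('Request', ['type', 'date'])

@st.composite
def requests(draw, req_type=None):
    # pick a type at random unless the caller pinned one down
    req_type = req_type or draw(st.sampled_from(['dated', 'dateless']))
    # dated requests carry a date; dateless ones don't
    date = draw(st.dates()) if req_type == 'dated' else None
    return Request(type=req_type, date=date)

This supports all three usages shown above: requests() for a random type, requests(req_type) inside data.draw, and requests('dated') / requests('dateless') under st.one_of.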


Mocking elasticsearch-py calls

I'm writing a CLI to interact with elasticsearch using the elasticsearch-py library. I'm trying to mock elasticsearch-py functions in order to test my functions without calling my real cluster.
I read this question and this one but I still don't understand.
main.py
# Escli inherits from cliff's App class
class Escli(App):
    _es = elasticsearch5.Elasticsearch()
settings.py
from escli.main import Escli
class Settings:
    def get(self, sections):
        raise NotImplementedError()

class ClusterSettings(Settings):
    def get(self, setting, persistency='transient'):
        settings = Escli._es.cluster \
            .get_settings(include_defaults=True, flat_settings=True) \
            .get(persistency) \
            .get(setting)
        return settings
settings_test.py
import escli.settings
class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('elasticsearch5.Elasticsearch')
        self.MockClass = self.patcher.start()

    def test_get(self):
        # Note this is an empty dict to show my point;
        # it will contain child dicts to allow my .get(persistency).get(setting)
        self.MockClass.return_value.cluster.get_settings.return_value = {}
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get('cluster.routing.allocation.node_concurrent_recoveries', persistency='transient')
        # ret should contain a subset of my dict defined above
I want Escli._es.cluster.get_settings() to return what I want (a dict object) so that it does not make the real HTTP call, but it keeps doing it.
What I know:
In order to mock an instance method I have to do something like
MagicMockObject.return_value.InstanceMethodName.return_value = ...
I cannot patch Escli._es.cluster.get_settings directly, because Python tries to import Escli as a module, which cannot work. So I'm patching the whole lib.
I desperately tried putting return_value everywhere, but I cannot understand why I can't mock that thing properly.
You should be mocking with respect to where you are testing. Based on the example provided, this means that the Escli class you are using in the settings.py module needs to be mocked with respect to settings.py. So, more practically, your patch call would look like this inside setUp instead:
self.patcher = patch('escli.settings.Escli')
With this, you are now mocking what you want in the right place based on how your tests are running.
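Put together, the test might look roughly like this (the settings dict and its values below are illustrative, not your real cluster settings):

from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        # patch Escli where settings.py looks it up, not where it is defined
        self.patcher = patch('escli.settings.Escli')
        self.MockEscli = self.patcher.start()
        self.addCleanup(self.patcher.stop)

    def test_get(self):
        # ClusterSettings.get reads Escli._es.cluster.get_settings(...),
        # so stub the return value on the mocked attribute chain
        self.MockEscli._es.cluster.get_settings.return_value = {
            'transient': {'cluster.routing.allocation.node_concurrent_recoveries': '2'},
        }
        ret = escli.settings.ClusterSettings().get(
            'cluster.routing.allocation.node_concurrent_recoveries',
            persistency='transient')
        self.assertEqual(ret, '2')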
Furthermore, to add more robustness to your testing, you might want to consider speccing for the Elasticsearch instance you are creating in order to validate that you are in fact calling valid methods that correlate to Elasticsearch. With that in mind, you can do something like this, instead:
self.patcher = patch('escli.settings.Escli', Mock(Elasticsearch))
To read a bit more about what exactly is meant by spec, check the patch section in the documentation.
As a final note, if you are interested in exploring the great world of pytest, there is a pytest-elasticsearch plugin created to assist with this.

python pytest for testing the requests and response

I am a beginner to using pytest in Python and am trying to write test cases for the following method, which gets the user address when a correct id is passed and otherwise raises a custom error, BadId.
def get_user_info(id: str, host='127.0.0.1', port=3000) -> str:
    uri = 'http://{}:{}/users/{}'.format(host, port, id)
    result = Requests.get(uri).json()
    address = result.get('user', {}).get('address', None)
    if address:
        return address
    else:
        raise BadId
Can someone help me with this, and can you also suggest the best resources for learning pytest? TIA
Your test regimen might look something like this.
First I suggest creating a fixture to be used in your various method tests. The fixture sets up an instance of your class to be used in your tests rather than creating the instance in the test itself. Keeping tasks separated in this way helps to make your tests both more robust and easier to read.
from my_package import MyClass
import pytest
@pytest.fixture
def a_test_object():
    return MyClass()
You can pass the test object to your series of method tests:
def test_something(a_test_object):
    # do the test
However if your test object requires some resources during setup (such as a connection, a database, a file, etc etc), you can mock it instead to avoid setting up the resources for the test. See this talk for some helpful info on how to do that.
By the way: if you need to test several different states of the user defined object being created in your fixture, you'll need to parametrize your fixture. This is a bit of a complicated topic, but the documentation explains fixture parametrization very clearly.
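As a quick illustration, a parametrized fixture might look like this (assuming, hypothetically, that MyClass accepts a state argument):

import pytest
from my_package import MyClass

@pytest.fixture(params=['state_a', 'state_b'])
def a_test_object(request):
    # pytest runs every test using this fixture once per param value
    return MyClass(request.param)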
The other thing you need to do is make sure any .get calls to Requests are intercepted. This is important because it allows your tests to be run without an internet connection, and ensures they do not fail as a result of a bad connection, which is not the thing you are trying to test.
You can intercept Requests.get by using the monkeypatch feature of pytest. All that is required is to include monkeypatch as an input parameter to the test regimen functions.
You can employ another fixture to accomplish this. It might look like this:
import Requests
import pytest
@pytest.fixture
def patched_requests(monkeypatch):
    # store a reference to the old get method
    old_get = Requests.get

    def mocked_get(uri, *args, **kwargs):
        '''A method replacing Requests.get

        Returns either a mocked response object (with json method)
        or the default response object if the uri doesn't match
        one of those that have been supplied.
        '''
        _, id = uri.split('/users/', 1)
        try:
            # attempt to get the correct mocked json method
            json = dict(
                with_address1=lambda: {'user': {'address': 123}},
                with_address2=lambda: {'user': {'address': 456}},
                no_address=lambda: {'user': {}},
                no_user=lambda: {},
            )[id]
        except KeyError:
            # fall back to default behavior
            obj = old_get(uri, *args, **kwargs)
        else:
            # create a mocked requests object
            mock = type('MockedReq', (), {})()
            # assign the mocked json to the mock's json attribute
            mock.json = json
            obj = mock
        return obj

    # finally, patch Requests.get with the patched version
    monkeypatch.setattr(Requests, 'get', mocked_get)
This looks complicated until you understand what is happening: we have simply made some mocked json objects (represented by dictionaries) with pre-determined user ids and addresses. The patched version of Requests.get simply returns an object (of type MockedReq) with the corresponding mocked .json() method when one of those ids is requested.
Note that Requests will only be patched in tests that actually use the above fixture, e.g.:
def test_something(patched_requests):
    # use patched Requests.get
Any test that does not use patched_requests as an input parameter will not use the patched version.
Also note that you could monkeypatch Requests within the test itself, but I suggest doing it separately. If you are using other parts of the requests API, you may need to monkeypatch those as well. Keeping all of this stuff separate is often going to be easier to understand than including it within your test.
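For comparison, patching inline within a single test (rather than through the shared fixture) could look like this; FakeResp here is a throwaway stand-in defined just for the test:

def test_get_user_info_inline(monkeypatch, a_test_object):
    class FakeResp:
        def json(self):
            return {'user': {'address': 789}}

    # stub Requests.get for this test only
    monkeypatch.setattr(Requests, 'get', lambda uri, *args, **kwargs: FakeResp())
    assert a_test_object.get_user_info('any-id') == 789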
Write your various method tests next. You'll need a different test for each aspect of your method. In other words, you will usually write a different test for the instance in which your method succeeds, and another one for testing when it fails.
First we test method success with a couple test cases.
@pytest.mark.parametrize('id, result', [
    ('with_address1', 123),
    ('with_address2', 456),
])
def test_get_user_info_success(patched_requests, a_test_object, id, result):
    address = a_test_object.get_user_info(id)
    assert address == result
Next we can test for raising the BadId exception using the with pytest.raises feature. Note that since an exception is raised, there is no result input parameter for the test function.
@pytest.mark.parametrize('id', [
    'no_address',
    'no_user',
])
def test_get_user_info_failure(patched_requests, a_test_object, id):
    from my_package import BadId
    with pytest.raises(BadId):
        address = a_test_object.get_user_info(id)
As posted in my comment, be sure to also check out Brian Okken's book and Bruno Oliveira's book. They are both very helpful for learning pytest.

How much should a unit test know about the function it is testing?

I'm writing a test for a caching mechanism. The mechanism has two cache layers, the request cache and redis. The request cache uses Flask.g, an object that stores values for the duration of the request. It does this by creating a dictionary, on the Flask.g._cache attribute.
However, I think that the exact attribute is an implementation detail that my unit test shouldn't care about. I want to make sure it stores its values on Flask.g, but I don't care about how it does that. What would be a good way to test this?
I'm using the Python mock module, so I know I can mock out Flask.g, but I'm not sure if there's a way to test whether there has been any property access on it, without caring about which property it is.
Is this even the right approach for tests like this?
Personally, I don't think you should be mocking Flask.g if you are testing endpoints, since you would be creating a self.app test client (I may be unsure about this portion).
Secondly, you will need to mock the redis client, with something like this being the returned structure:
import fnmatch

class RedisTestCase(object):
    saved_dictionary = {}

    def keys(self, pattern):
        # loop over the saved keys, collecting those that match the glob pattern
        found_keys = []
        for key in RedisTestCase.saved_dictionary:
            if fnmatch.fnmatch(key, pattern):
                found_keys.append(key)
        return found_keys

    def delete(self, keys):
        for key in keys:
            if key in RedisTestCase.saved_dictionary:
                del RedisTestCase.saved_dictionary[key]

    def mset(self, items):
        RedisTestCase.saved_dictionary.update(items)

    def pipeline(self):
        return RedisTestCase()
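Wiring the fake in might then look something like this, assuming (hypothetically) that the code under test reads its client from a module-level redis_client attribute in myapp.cache:

import myapp.cache  # hypothetical module holding the real client

def test_delete_removes_matching_keys(monkeypatch):
    fake = RedisTestCase()
    monkeypatch.setattr(myapp.cache, 'redis_client', fake)
    fake.mset({'user:1': 'a', 'user:2': 'b', 'other': 'c'})
    fake.delete(fake.keys('user:*'))
    # only the keys matching the pattern are gone
    assert fake.keys('user:*') == []
    assert fake.keys('other') == ['other']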

Initialize module python

I have a module which wraps a JSON API for querying song cover/remix data, with limits on the number of requests per hour/minute. I'd like to keep an optional cache of JSON responses without forcing users to adjust a cache/context parameter every time. What is a good way of initializing a library/module in Python? Or would you recommend I just do the explicit thing and use a cache named parameter in every call that eventually requests JSON data?
I was thinking of doing
_cache = None

class LFU(object):
    ...

NO_CACHE, USE_LFU = "NO_CACHE", "LFU"

def set_cache_strategy(strategy):
    global _cache
    if strategy == NO_CACHE:
        _cache = None
    else:
        _cache = LFU()
import second_hand_songs_wrapper as s
s.set_cache_strategy(s.USE_LFU)
l1 = s.ShsLabel.get_from_resource_id(123)
l2 = s.ShsLabel.get_from_resource_id(123, use_cache=False)
edit:
I'm probably only planning on having two strategies: one with and one without a cache.
Other possible initialization schemes I can think of off the top of my head include using environment variables, initializing _cache by hand in the user code to None/LFU(), and using an explicit cache parameter everywhere (possibly defaulting to having a cache).
Note the reason I don't set the cache on an instance of the class is that I currently use a never-instantiated class (class functions + class state as a singleton) to abstract downloading the JSON data, along with some convenience methods to download certain URLs. I could instantiate the downloader class, but then I'd have to pass the instance explicitly to each function or use another global variable for a convenience version of the class. The downloader class also keeps track of the number of requests (the website has a limit per minute/hour), so having multiple downloader objects would cause more trouble.
There's nothing wrong with setting a default, even if that default is None. I would note, though, that having the pseudo-constants as well as a conditional (provided that's all you use the values for) is redundant. Try:
caching_strategies = {'NO_CACHE': lambda: None,
                      'LFU': LFU}

_cache = caching_strategies['NO_CACHE']()

def set_cache_strategy(strategy):
    global _cache
    _cache = caching_strategies[strategy]()
If you want to provide a convenience method for the available strategies, just wrap caching_strategies.keys(). Really though, as far as your strategies go, you should probably have all your strategies inherit from some base strategy, and just create a no_cache strategy class that inherits from that and stubs all the methods of your standardized caching interface.
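A rough sketch of that base-class idea, assuming a minimal get/put caching interface (the method names are illustrative):

class CacheStrategy(object):
    '''Base class defining the standardized caching interface.'''
    def get(self, key):
        raise NotImplementedError()
    def put(self, key, value):
        raise NotImplementedError()

class NoCache(CacheStrategy):
    '''Null-object strategy: stubs every method, so callers never branch.'''
    def get(self, key):
        return None  # always a miss
    def put(self, key, value):
        pass  # silently discard

With this shape, _cache is never None and the rest of the code can call _cache.get(...) unconditionally.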

Proper way to test Django signals

I'm trying to test a sent signal and its providing_args. The signal is triggered inside the contact_question_create view, just after form submission.
My TestCase is something like:
def test_form_should_post_proper_data_via_signal(self):
    form_data = {'name': 'Jan Nowak'}
    signals.question_posted.send(sender='test', form_data=form_data)

    @receiver(signals.question_posted, sender='test')
    def question_posted_listener(sender, form_data):
        self.name = form_data['name']

    eq_(self.name, 'Jan Nowak')
Is this the proper way to test this signal? Any better ideas?
Simplest way to do what you asked in 2015:
from unittest.mock import patch
@patch('full.path.to.signals.question_posted.send')
def test_question_posted_signal_triggered(self, mock):
    form = YourForm()
    form.cleaned_data = {'name': 'Jan Nowak'}
    form.save()

    # Check that your signal was called.
    self.assertTrue(mock.called)

    # Check that your signal was called only once.
    self.assertEqual(mock.call_count, 1)

    # Do whatever else, like actually checking if your signal logic did well.
And with that, you just tested that your signal was properly triggered.
I have an alternative suggestion using the mock library, which is part of the standard library as unittest.mock in Python 3 (if you're using Python 2, you'll have to pip install mock).
try:
    from unittest.mock import MagicMock
except ImportError:
    from mock import MagicMock

def test_form_should_post_proper_data_via_signal(self):
    """
    Assert signal is sent with proper arguments
    """
    # Create handler
    handler = MagicMock()
    signals.question_posted.connect(handler, sender='test')

    # Post the form or do what it takes to send the signal
    signals.question_posted.send(sender='test', form_data=form_data)

    # Assert the signal was called only once with the args
    handler.assert_called_once_with(signal=signals.question_posted, form_data=form_data, sender="test")
The essential part of the suggestion is to mock a receiver, then test whether or not your signal is being sent to that receiver, and called only once. This is great, especially if you have custom signals, or you've written methods that send signals and you want to ensure in your unit tests that they are being sent.
I've resolved the problem by myself. I think that the best solution is following:
def test_form_should_post_proper_data_via_signal(self):
    # define the local listener
    def question_posted_listener(sender, form_data, **kwargs):
        self.name = form_data['name']

    # prepare fake data
    form_data = {'name': 'Jan Nowak'}

    # connect & send the signal
    signals.question_posted.connect(question_posted_listener, sender='test')
    signals.question_posted.send(sender='test', form_data=form_data)

    # check results
    eq_(self.name, 'Jan Nowak')
The purpose of this isn't to test the underlying signalling mechanism, but rather is an important unit test to ensure that whatever signal your method is supposed to emit is actually emitted with the proper arguments. In this case, it seems a little trivial since it's an internal Django signal, but imagine if you wrote the method that was emitting a custom signal.
You need to:
assert a signal was emitted with the proper arguments,
a specific number of times, and
in the appropriate order.
You can use mock_django app which provides a mock for signals.
Example:
from mock import call
# mock_signal_receiver lives in the mock_django package
from mock_django.signals import mock_signal_receiver
def test_install_dependency(self):
    with mock_signal_receiver(post_app_install) as install_receiver:
        self.env.install(self.music_app)
        self.assertEqual(install_receiver.call_args_list, [
            call(signal=post_app_install, sender=self.env,
                 app=self.ukulele_app),
            call(signal=post_app_install, sender=self.env,
                 app=self.music_app),
        ])
Why do you test your framework? Django already has unit tests for the signal dispatcher. If you don't believe that your framework is fine, just attach its unit tests to your test runner.
For my part, I wouldn't test that the signal is sent. I would test the intended effect of the signals processing.
In my use case, the signals are used to update a Produit.qte attribute when, say, Order.qte_shipped is updated. (E.g. when we fill an order, I want the shipped quantity to be subtracted from the corresponding product for that order.)
Thus I do something like this in signals.py:
@receiver(post_save, sender='orders.Order')
@disable_for_loaddata
def quantity_adjust_order(sender, **kwargs):
    # retrieve the corresponding product for that order
    # subtract Order.qte_shipped from Produit.qte
    # save the updated Produit
What I actually test is that Produit.qte is correctly updated when I ship an Order. I do not test that the signal works; that's just one of the things that COULD explain why test_order_ship_updates_product() failed.
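A sketch of such an effect-oriented test, assuming hypothetical Produit and Order models with the fields mentioned above:

def test_order_ship_updates_product(self):
    produit = Produit.objects.create(qte=10)
    order = Order.objects.create(produit=produit, qte_shipped=0)
    order.qte_shipped = 3
    order.save()  # post_save fires; the receiver should adjust Produit.qte
    produit.refresh_from_db()
    self.assertEqual(produit.qte, 7)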
I somewhat agree with what @Piotr Czapla said; you're kind of trying to test the framework. Test the effect on your code instead.
