How to parametrize prepared data - python

I need to prepare an entity with suitable child entities. I need to specify the type of entities that will be stored in the prepared parent entity. Like this:
@pytest.fixture(element_types)
def entry_type_id(element_types):
    elements = [resolve_elements_create(data=element_data(element_type)) for element_type in element_types]
    entry_type_id = resolve_entry_type_create(elements)
    return entry_type_id

def test_something(entry_type_id([ElementType1, ElementType2])):
    ...
I can't create one fixture for each use case, because there are too many combinations I need. Is there any way I can pass parameters to the fixture to customize the prepared entity?

I don't fully understand what your end goal is, but based on your comment I think you should create a test class so you can create the elements and then delete them, since you want to test the creation and deletion of the entries:
@pytest.fixture(scope="class")
def entry_type(request):
    element = resolve_elements_create(data=element_data(request.param))
    # This should return 0 if there is an error during creation
    return resolve_entry_type_create(element)
followed by the test itself:
@pytest.mark.parametrize("entry_type", [ElementType1, ElementType2], indirect=True)
class TestEntries:
    def test_create_entry(self, entry_type):
        assert entry_type

    def test_delete_entry(self, entry_type):
        assert delete_entry(entry_type)
This is more pseudocode than a finished solution and will need some changes, but in most cases the use of fixtures should be preferred over plain functions in pytest.
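To make the indirect-parametrization idea concrete, here is a minimal runnable sketch. The resolve_* helpers and element types are stand-ins (assumptions; the real implementations live in the project under test), so the fixture mechanics can be seen end to end:

```python
import pytest

# Stand-ins for the real element types (assumption: these are just markers).
ElementType1, ElementType2 = "type1", "type2"

def resolve_elements_create(data):
    # Stand-in for the real creation call: just echo the payload.
    return {"element": data}

def resolve_entry_type_create(element):
    # Stand-in: returns a truthy result on success, 0 on failure.
    return {"entry_type": element} if element else 0

@pytest.fixture(scope="class")
def entry_type(request):
    # request.param is injected by the indirect parametrization below.
    element = resolve_elements_create(data=request.param)
    return resolve_entry_type_create(element)

@pytest.mark.parametrize("entry_type", [ElementType1, ElementType2], indirect=True)
class TestEntries:
    def test_create_entry(self, entry_type):
        assert entry_type
```

Running pytest on this file executes `test_create_entry` twice, once per element type, with the fixture rebuilt for each parameter.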

Related

Correct use of pytest fixtures of objects with Django

I am relatively new to pytest, so I understand the simple use of fixtures that looks like that:
@pytest.fixture
def example_data():
    return "abc"
and then using it in a way like this:
def test_data(self, example_data):
    assert example_data == "abc"
I am working on a django app and where it gets confusing is when I try to use fixtures to create django objects that will be used for the tests.
The closest solution that I've found online looks like that:
@pytest.fixture
def test_data(self):
    users = get_user_model()
    client = users.objects.get_or_create(username="test_user", password="password")
and then I am expecting to be able to access this user object in a test function:
@pytest.mark.django_db
@pytest.mark.usefixtures("test_data")
async def test_get_users(self):
    # the user object should be included in this queryset
    all_users = await sync_to_async(User.objects.all)()
    ... (doing assertions) ...
The issue is that when I try to list all the users I can't find the one that was created as part of the test_data fixture and therefore can't use it for testing.
I noticed that if I create the objects inside the function then there is no problem, but this approach won't work for me because I need to parametrize the function and depending on the input add different groups to each user.
I also tried some type of init or setup function for my testing class and creating the User test objects from there but this doesn't seem to be pytest's recommended way of doing things. But either way that approach didn't work either when it comes to listing them later.
Is there any way to create test objects which will be accessible when doing a queryset?
Is the right way to manually create separate functions and objects for each test case or is there a pytest-way of achieving that?
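The parametrize-and-customize pattern asked about here can be sketched without Django at all. Everything below is a stand-in (a plain `User` dataclass instead of `get_user_model()`, hypothetical group names), just to show how a parametrized fixture can create a different object per test run:

```python
import pytest
from dataclasses import dataclass, field

@dataclass
class User:
    # Stand-in for the Django user model (assumption).
    username: str
    groups: list = field(default_factory=list)

def create_user(username, groups):
    # Stand-in for users.objects.get_or_create(...) plus group assignment.
    return User(username=username, groups=list(groups))

@pytest.fixture(params=[["admins"], ["editors", "viewers"]])
def test_user(request):
    # Each parametrized run creates the user with a different group set.
    return create_user("test_user", request.param)

def test_user_has_groups(test_user):
    assert test_user.groups
```

With pytest-django, the same fixture would additionally request the `db` fixture so the created rows land in the test database and show up in later querysets.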

pytest mark django db: avoid to save fixture in tests

I need to:
Avoid using save() in tests
Use @pytest.mark.django_db on all tests inside this class
Create a number of trx fixtures (10-20) to act as fake data.
import pytest
from ngg.processing import (
    elab_data
)

class TestProcessing:
    @pytest.mark.django_db
    def test_elab_data(self, plan,
                       obp,
                       customer,
                       bac,
                       col,
                       trx_1,
                       trx_2,
                       ...):
        plan.save()
        obp.save()
        customer.save()
        bac.save()
        col.save()
        trx_1.save()
        trx_2.save()
        elab_data(bac, col)
Where the fixtures are simply models like that:
@pytest.fixture
def plan():
    plan = Plan(
        name='test_plan',
        status='1'
    )
    return plan
I don't find this way really clean. How would you do that?
TL;DR
test.py
import pytest
from ngg.processing import elab_data

@pytest.mark.django_db
class TestProcessing:
    def test_elab_data(self, plan, obp, customer, bac, col, trx_1, trx_2):
        elab_data(bac, col)
conftest.py
@pytest.fixture(params=[
    ('test_plan', 1),
    ('test_plan2', 2),
])
def plan(request, db):
    name, status = request.param
    return Plan.objects.create(name=name, status=status)
I'm not quite sure if I got it correctly
Avoid using save() in tests
You may create objects using instance = Model.objects.create() or just put instance.save() in fixtures.
As described at note section here
To access the database in a fixture, it is recommended that the fixture explicitly request one of the db, transactional_db or django_db_reset_sequences fixtures.
and at fixture section here
This fixture will ensure the Django database is set up. Only required for fixtures that want to use the database themselves. A test function should normally use the pytest.mark.django_db() mark to signal it needs the database.
you may want to use the db fixture in your record fixtures and keep the django_db mark in your test cases.
Use @pytest.mark.django_db on all tests inside this class
To mark whole classes you may use decorator on classes or pytestmark variable as described here.
You may use pytest.mark decorators with classes to apply markers to all of its test methods
To apply marks at the module level, use the pytestmark global variable
Create a number of trx fixtures (10-20) to act as fake data.
I didn't quite get what you were trying to do, but I assume it is one of the following:
Create multiple objects and pass them as fixtures
In that case you may want to create a fixture that returns a generator or a list, and use that whole list instead of multiple fixtures
Test case using different variants of fixture but with one/few at a time
In that case you may want to parametrize your fixture so it returns different objects, and the test case will run multiple times, once per variant
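The first option, one fixture returning all the fake transactions, can be sketched like this. `Trx` and `make_trxs` are stand-ins (assumptions) for the real model and whatever builds the false data:

```python
import pytest
from dataclasses import dataclass

@dataclass
class Trx:
    # Stand-in for the real transaction model (assumption).
    amount: int

def make_trxs(count):
    # One helper builds all the fake transactions; a fixture just returns it.
    return [Trx(amount=i * 10) for i in range(count)]

@pytest.fixture
def trxs():
    # With pytest-django this fixture would also request `db`
    # and call objects.create() instead of building plain instances.
    return make_trxs(10)

def test_elab_data_with_trxs(trxs):
    assert len(trxs) == 10
```

The test then takes a single `trxs` argument instead of `trx_1, trx_2, ...`, which keeps the signature clean however many transactions are needed.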

Mocking elasticsearch-py calls

I'm writing a CLI to interact with elasticsearch using the elasticsearch-py library. I'm trying to mock elasticsearch-py functions in order to test my functions without calling my real cluster.
I read this question and this one but I still don't understand.
main.py
Escli inherits from cliff's App class
class Escli(App):
    _es = elasticsearch5.Elasticsearch()
settings.py
from escli.main import Escli

class Settings:
    def get(self, sections):
        raise NotImplementedError()

class ClusterSettings(Settings):
    def get(self, setting, persistency='transient'):
        settings = Escli._es.cluster\
            .get_settings(include_defaults=True, flat_settings=True)\
            .get(persistency)\
            .get(setting)
        return settings
settings_test.py
from unittest import TestCase
from unittest.mock import patch

import escli.settings

class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('elasticsearch5.Elasticsearch')
        self.MockClass = self.patcher.start()

    def test_get(self):
        # Note this is an empty dict to show my point;
        # it will contain child dicts to allow my .get(persistency).get(setting)
        self.MockClass.return_value.cluster.get_settings.return_value = {}
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get('cluster.routing.allocation.node_concurrent_recoveries', persistency='transient')
        # ret should contain a subset of my dict defined above
I want Escli._es.cluster.get_settings() to return what I want (a dict object) so that it does not make the real HTTP call, but it keeps doing it.
What I know:
In order to mock an instance method I have to do something like
MagicMockObject.return_value.InstanceMethodName.return_value = ...
I cannot patch Escli._es.cluster.get_settings because Python tries to import Escli as a module, which cannot work. So I'm patching the whole library.
I desperately tried to put some return_value everywhere but I cannot understand why I can't mock that thing properly.
You should be mocking with respect to where you are testing. Based on the example provided, this means that the Escli class you are using in the settings.py module needs to be mocked with respect to settings.py. So, more practically, your patch call would look like this inside setUp instead:
self.patcher = patch('escli.settings.Escli')
With this, you are now mocking what you want in the right place based on how your tests are running.
Furthermore, to add more robustness to your testing, you might want to consider speccing for the Elasticsearch instance you are creating in order to validate that you are in fact calling valid methods that correlate to Elasticsearch. With that in mind, you can do something like this, instead:
self.patcher = patch('escli.settings.Escli', Mock(Elasticsearch))
To read a bit more about what exactly is meant by spec, check the patch section in the documentation.
As a final note, if you are interested in exploring the great world of pytest, there is a pytest-elasticsearch plugin created to assist with this.
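The "mock where it is used" point can be shown self-contained, without the escli package or elasticsearch installed. The `Escli` class and `get_setting` function below are stand-ins mirroring the question's structure; the key part is that once the right object is replaced, the chained `return_value` works and no HTTP call happens:

```python
from unittest.mock import MagicMock, patch

class Escli:
    # In the real code this is elasticsearch5.Elasticsearch().
    _es = None

def get_setting(setting, persistency="transient"):
    # Mirrors ClusterSettings.get from the question.
    return Escli._es.cluster.get_settings(
        include_defaults=True, flat_settings=True
    ).get(persistency).get(setting)

def run_mocked():
    fake = MagicMock()
    # The nested dict satisfies the .get(persistency).get(setting) chain.
    fake.cluster.get_settings.return_value = {
        "transient": {"cluster.routing.allocation.node_concurrent_recoveries": "2"}
    }
    # Patch the attribute on the class actually used by the code under test.
    with patch.object(Escli, "_es", fake):
        return get_setting("cluster.routing.allocation.node_concurrent_recoveries")
```

In the real test suite the equivalent move is `patch('escli.settings.Escli')`, as the answer says: the target string names the object where the code under test looks it up, not where it was defined.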

Meaning of the map function in couchdb-pythons ViewField

I'm using the couchdb.mapping in one of my projects. I have a class called SupportCase derived from Document that contains all the fields I want.
My database (called admin) contains multiple document types. I have a type field in all the documents which I use to distinguish between them. I have many documents of type "case" which I want to get at using a view. I have a design document called support with a view inside it called cases. If I request the results of this view using db.view("support/cases"), I get back a list of Rows which have what I want.
However, I want to somehow have this wrapped by the SupportCase class so that I can call a single function and get back a list of all the SupportCases in the system. I created a ViewField property
@ViewField.define('cases')
def all(self, doc):
    if doc.get("type", "") == "case":
        yield doc["_id"], doc
Now, if I call SupportCase.all(db), I get back all the cases.
What I don't understand is whether this view is precomputed and stored in the database or done on demand similar to db.query. If it's the latter, it's going to be slow and I want to use a precomputed view. How do I do that?
I think what you need is:
@classmethod
def all(cls):
    result = cls.view(db, "support/all", include_docs=True)
    return result.rows
The Document class has a classmethod view which wraps the rows in the class on which it is called. So the following returns a ViewResult with rows of type SupportCase, and taking .rows of that gives a list of support cases.
SupportCase.view(db, viewname, include_docs=True)
And I don't think you need to get into the ViewField magic. But let me explain how it works. Consider the following example from the CouchDB-python documentation.
class Person(Document):
    @ViewField.define('people')
    def by_name(doc):
        yield doc['name'], doc
I think this is equivalent to:
class Person(Document):
    @classmethod
    def by_name(cls, db, **kw):
        return cls.view(db, **kw)
With the original function attached to Person.by_name.map_fun.
The map function is in some ways analogous to an index in a relational database. It is not done again every time, and when new documents are added the way it is updated does not require everything to be redone (it's a kind of tree structure).
This has a pretty good summary
ViewField uses a pre-defined view so, once built, will be fast. It definitely doesn't use a temporary view.

Get existing or create new App Engine Syntax

I came across this syntax browsing through code for examples. From its surrounding code, it looked like it would a) get the entity with the given key name or b) if the entity did not exist, create a new entity that could be saved. Assume my model class is called MyModel.
my_model = MyModel(key_name='mymodelkeyname',
                   kwarg1='first arg', kwarg2='second arg')
I'm now running into issues, but only in certain situations. Is my assumption about what this snippet does correct? Or should I always do the following?
my_model = MyModel.get_by_key_name('mymodelkeyname')
if not my_model:
    my_model = MyModel(key_name='mymodelkeyname',
                       kwarg1='first arg', kwarg2='second arg')
else:
    # do something with my_model
The constructor, which is what you're using, always constructs a new entity. When you store it, it overwrites any other entity with the same key.
The alternate code you propose also has an issue: it's susceptible to race conditions. Two instances of that code running simultaneously could both determine that the entity does not exist, and each create it, resulting in one overwriting the work of the other.
What you want is the Model.get_or_insert method, which is syntactic sugar for this:
def get_or_insert(cls, key_name, **kwargs):
    def _tx():
        model = cls.get_by_key_name(key_name)
        if not model:
            model = cls(key_name=key_name, **kwargs)
            model.put()
        return model
    return db.run_in_transaction(_tx)
Because the get operation and the conditional insert take place in a transaction, the race condition is not possible.
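The semantics can be illustrated with a plain in-memory store standing in for the datastore (so this sketch runs without App Engine, and, unlike the real thing, without the transaction wrapper). Note how the second call's kwargs are ignored because the entity already exists:

```python
# In-memory stand-in for the datastore (assumption: single-threaded,
# so no transaction is needed here; the real get_or_insert runs in one).
_store = {}

class MyModel:
    def __init__(self, key_name, **kwargs):
        self.key_name = key_name
        self.kwargs = kwargs

    def put(self):
        _store[self.key_name] = self

    @classmethod
    def get_by_key_name(cls, key_name):
        return _store.get(key_name)

    @classmethod
    def get_or_insert(cls, key_name, **kwargs):
        # Same logic as the syntactic-sugar expansion above.
        model = cls.get_by_key_name(key_name)
        if not model:
            model = cls(key_name=key_name, **kwargs)
            model.put()
        return model
```

Calling `MyModel.get_or_insert('mymodelkeyname', kwarg1='first arg')` twice returns the same entity, and the stored kwargs come from whichever call actually created it.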
Is this what you are looking for -> http://code.google.com/appengine/docs/python/datastore/modelclass.html#Model_get_or_insert
