I'm not sure if this is an IntelliJ thing or not (I'm using the built-in test runner), but I have a class whose logging output I'd like to appear in the test case I'm running. I hope the example code is enough scope; if not, I can edit to include more.
Basically, the log.info() call in the Matching class never shows up in my test runner console. Is there something I need to configure on the class that extends TestCase?
Here's the class in matching.py:
class Matching(object):
    """
    The main compliance matching logic.
    """

    request_data = None

    def __init__(self, matching_request):
        """
        Set matching request information.
        """
        self.request_data = matching_request

    def can_matching_run(self):
        raise Exception("Not implemented yet.")

    def run_matching(self):
        log.info("Matching started at {0}".format(datetime.now()))
Here is the test:
class MatchingServiceTest(IntegrationTestBase):
    def __do_matching(self, client_name, date_range):
        """
        Pull control records from the control table, and compare against
        program-generated matching data from the non-control table.

        The ``client_name`` dictates which model to use. Data is compared within
        a mock ``date_range``.
        """
        from matching import Matching, MatchingRequest

        # Run the actual matching service for the client.
        match_request = MatchingRequest(client_name, date_range)
        matcher = Matching(match_request)
        matcher.run_matching()
Well, I do not see where you initialize the log object, but I presume you do that somewhere and add a Handler to it (StreamHandler, FileHandler, etc.).
This means that during your tests that setup does not occur, so you would have to do it in the test. Since you did not post that part of the code, I can't give an exact solution, but something along these lines:
import logging
log = logging.getLogger("your-logger-name")
log.addHandler(logging.StreamHandler())
log.setLevel(logging.DEBUG)
That said, tests should generally not print anything to stdout. It's better to use a FileHandler, and you should design your tests so that they fail if something goes wrong; that's the whole point of automated tests, so you won't have to manually inspect the output. If a test does fail, you can then check the log to see whether it contains useful debugging information.
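If you do want the logging output while the tests run, one way is to wire a handler up in the test class itself. A minimal sketch, assuming the logger in matching.py was created with logging.getLogger("matching") (adjust the name to whatever the module really uses):

import logging

class MatchingServiceTest(IntegrationTestBase):
    def setUp(self):
        super(MatchingServiceTest, self).setUp()
        # "matching" is an assumed logger name; use the one matching.py actually creates.
        log = logging.getLogger("matching")
        log.setLevel(logging.DEBUG)
        # A FileHandler keeps test stdout clean; swap in StreamHandler() for console output.
        log.addHandler(logging.FileHandler("matching_test.log"))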
Hope this helps.
Read more in the Python logging documentation.
According to https://docs.pytest.org/en/stable/getting-started.html, when grouping tests inside classes in pytest, each test has a unique instance of the class, because having each test share the same class instance would be very detrimental to test isolation and would promote poor test practices. What does that mean? This is outlined below:
Content of test_class_demo.py:
class TestClassDemoInstance:
    def test_one(self):
        assert 0

    def test_two(self):
        assert 0
Imagine you are testing a user account system, where you can create users and change passwords. You need to have a user before you can change its password, and you don't want to repeat yourself, so you could naively structure the test like this:
class TestUserService:
    def test_create_user(self):
        # Store user_id on self to reuse it in subsequent tests.
        self.user_id = UserService.create_user("timmy")
        assert UserService.get_user_id("timmy") == self.user_id

    def test_set_password(self):
        # Set the password of the user we created in the previous test.
        UserService.set_password(self.user_id, "hunter2")
        assert UserService.is_password_valid(self.user_id, "hunter2")
But by using self to pass data from one test case to the next, you have created several problems:
The tests must be run in this order. First test_create_user, then test_set_password.
All tests must be run. You can't re-run just test_set_password independently.
All previous tests must pass. If test_create_user fails, we can no longer meaningfully run test_set_password.
Tests can no longer be run in parallel.
So to prevent this kind of design, pytest wisely decided that each test gets a brand new instance of the test class.
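If you do need shared setup, the pytest-idiomatic alternative is a fixture rather than instance state. A rough sketch, reusing the hypothetical UserService from the example above:

import pytest

@pytest.fixture
def user_id():
    # A fresh user is created for every test that asks for this fixture.
    return UserService.create_user("timmy")

def test_create_user(user_id):
    assert UserService.get_user_id("timmy") == user_id

def test_set_password(user_id):
    # No ordering or shared state: this test can run on its own.
    UserService.set_password(user_id, "hunter2")
    assert UserService.is_password_valid(user_id, "hunter2")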
Just to add to Thomas' answer: please be aware that there is currently an error in the documentation you mentioned. As described in an open issue on the pytest GitHub, in the output shown below that explanation, both tests are shown sharing the same instance of the class, at 0xdeadbeef, which seems to demonstrate the opposite of what is stated (that the instances are not the same).
I'm writing a CLI to interact with elasticsearch using the elasticsearch-py library. I'm trying to mock elasticsearch-py functions in order to test my functions without calling my real cluster.
I read this question and this one but I still don't understand.
main.py
Escli inherits from cliff's App class
class Escli(App):
    _es = elasticsearch5.Elasticsearch()
settings.py
from escli.main import Escli


class Settings:
    def get(self, sections):
        raise NotImplementedError()


class ClusterSettings(Settings):
    def get(self, setting, persistency='transient'):
        settings = Escli._es.cluster\
            .get_settings(include_defaults=True, flat_settings=True)\
            .get(persistency)\
            .get(setting)
        return settings
settings_test.py
from unittest import TestCase
from unittest.mock import patch

import escli.settings


class TestClusterSettings(TestCase):
    def setUp(self):
        self.patcher = patch('elasticsearch5.Elasticsearch')
        self.MockClass = self.patcher.start()

    def test_get(self):
        # Note this is an empty dict to show my point;
        # it will contain child dicts to allow my .get(persistency).get(setting)
        self.MockClass.return_value.cluster.get_settings.return_value = {}
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get('cluster.routing.allocation.node_concurrent_recoveries', persistency='transient')
        # ret should contain a subset of my dict defined above
I want Escli._es.cluster.get_settings() to return what I want (a dict object) so that no real HTTP call is made, but it keeps making the call.
What I know:
In order to mock an instance method I have to do something like
MagicMockObject.return_value.InstanceMethodName.return_value = ...
I cannot patch Escli._es.cluster.get_settings directly, because Python tries to import Escli as a module, which cannot work. So I'm patching the whole lib.
I desperately tried putting return_value everywhere, but I cannot understand why I can't mock that thing properly.
You should be mocking with respect to where you are testing. Based on the example provided, this means that the Escli class you are using in the settings.py module needs to be mocked with respect to settings.py. So, more practically, your patch call would look like this inside setUp instead:
self.patcher = patch('escli.settings.Escli')
With this, you are now mocking what you want in the right place based on how your tests are running.
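For illustration, a sketch of how the whole test could look with that patch target; the nested dict and the '2' value are made up purely to show the shape the code under test expects:

from unittest import TestCase
from unittest.mock import patch

import escli.settings


class TestClusterSettings(TestCase):
    def setUp(self):
        # Patch Escli where settings.py looks it up, not where it is defined.
        self.patcher = patch('escli.settings.Escli')
        self.MockEscli = self.patcher.start()
        self.addCleanup(self.patcher.stop)

    def test_get(self):
        # _es is an attribute on the (mocked) class, so no .return_value before it;
        # get_settings() is a call, so its return_value is what the code under test receives.
        self.MockEscli._es.cluster.get_settings.return_value = {
            'transient': {'cluster.routing.allocation.node_concurrent_recoveries': '2'},
        }
        cluster_settings = escli.settings.ClusterSettings()
        ret = cluster_settings.get(
            'cluster.routing.allocation.node_concurrent_recoveries',
            persistency='transient')
        self.assertEqual(ret, '2')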
Furthermore, to make your testing more robust, you might want to consider speccing the Elasticsearch instance you are creating, in order to validate that you are in fact calling methods that actually exist on Elasticsearch. With that in mind, you can do something like this instead:
self.patcher = patch('escli.settings.Escli', Mock(Elasticsearch))
To read a bit more about what exactly is meant by spec, check the patch section of the unittest.mock documentation.
As a final note, if you are interested in exploring the great world of pytest, there is a pytest-elasticsearch plugin created to assist with this.
I am using cherrypy as a web server, and I want to check a user's logged-in status before returning the page. This works on methods in the main Application class (in site.py), but gives an error when I call the same decorated function on a method in a class that is one layer deeper in the webpage tree (in a separate file).
validate_user() is the function used as a decorator. It either passes a user to the page or sends them to a 401 restricted page, as a cherrypy.Tool, like this:
from user import validate_user
cherrypy.tools.validate_user = cherrypy.Tool('before_handler', validate_user)
I attach different sections of the site to the main site.py file's Application class by assigning instances of the sub-classes as variables accordingly:
from user import UserAuthentication


class Root:
    user = UserAuthentication()  # maps user/login, user/register, user/logout, etc.
    admin = Admin()
    api = Api()

    @cherrypy.expose
    @cherrypy.tools.validate_user()
    def how_to(self, **kw):
        from other_stuff import how_to_page
        return how_to_page(kw)
This, however, does not work when I try to use the validate_user() inside the Admin or Api or Analysis sections. These are in separate files.
import cherrypy


class Analyze:
    @cherrypy.expose
    @cherrypy.tools.validate_user()  #### THIS LINE GIVES ERROR ####
    def explore(self, *args, **kw):  # @addkw(fetch=['uid'])
        import explore
        kw['uid'] = cherrypy.session.get('uid', -1)
        return explore.explorer(args, kw)
The error is that cherrypy.tools doesn't have a validate_user function or method, yet other things I assign in site.py do appear on cherrypy here. Why can't I use this tool in a separate file that is part of my overall site map?
If this is relevant, the validate_user() function simply looks at the cherrypy.request.cookie, finds the 'session_token' value, and compares it to our database and passes it along if the ID matches.
Sorry I don't know if the Analyze() and Api() and User() pages are subclasses, or nested classes, or extended methods, or what. So I can't give this a precise title. Do I need to pass in the parent class to them somehow?
The issue here is that Python processes everything except the function/method bodies at import time. So in site.py, when you import user (or from user import <anything>), all of the user module is processed before the Python interpreter has gotten to the definition of the validate_user tool, including the decorator, which is trying to access that tool by value (rather than by reference).
CherryPy has another mechanism for decorating handlers with config that will enable tools on them. Instead of @cherrypy.tools.validate_user, use:
@cherrypy.config(**{"tools.validate_user.on": True})
This decorator works because, rather than needing to access validate_user from cherrypy.tools in order to install itself on the handler, it simply configures CherryPy to install that tool on the handler later, when the handler is invoked.
If that tool is needed for all methods on that class, you can use that config decorator on the class itself.
You could, alternatively, enable that tool for the given endpoints in the server config, as mentioned in the other question.
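For illustration, a sketch of the class-level version for the Analyze handler from the question (the method body is elided):

import cherrypy

@cherrypy.config(**{"tools.validate_user.on": True})
class Analyze:
    @cherrypy.expose
    def explore(self, *args, **kw):
        # ... handler body unchanged ...
        pass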
I have an issue with my unit tests and the way Django manages transactions.
In my code I have a function:
def send():
    autocommit = transaction.set_autocommit(False)
    try:
        # stuff
    finally:
        transaction.rollback()
        transaction.set_autocommit(autocommit)
In my test I have:
class MyTest(TransactionTestCase):
    def test_send(self):
        send()
The issue I am having is that test_send itself passes, but around 80% of my other tests then fail.
It seems the transactions of those other tests are failing.
By the way, I am using py.test to run my tests.
EDIT:
To make things clearer: when I run only myapp.test.test_module.py, it runs fine and all 3 tests pass, but when I run the whole test suite most of the tests fail. I will try to put together a minimal test app.
Also, all my tests pass with the default test runner from Django.
EDIT2:
Here is a minimal example to test this issue:
class ManagementTestCase(TransactionTestCase):
    def test_transfer_ubl(self, MockExact):
        pass


class TestTestCase(TestCase):
    def test_1_user(self):
        get_user_model().objects.get(username="admin")
        self.assertEqual(get_user_model().objects.all().count(), 1)
Bear in mind there is a data migration that adds an "admin" user (the TestTestCase succeeds on its own, but not when the ManagementTestCase is run before it).
It seems autocommit has nothing to do with it.
The TestCase class wraps the tests inside two nested atomic blocks. Therefore it is not possible to use transaction.set_autocommit() or transaction.rollback() if you are inheriting from TestCase.
As the docs say, you should use TransactionTestCase if you are testing specific database transaction behaviour.
Having autocommit = transaction.set_autocommit(False) inside the send function feels wrong. Disabling the transaction there is presumably done for testing purposes, but the rule of thumb is to keep your test logic out of your code.
As @Alasdair has pointed out, the Django docs state that "Django’s TestCase class also wraps each test in a transaction for performance reasons."
It is not clear from your question whether you're testing specific database transaction logic or not; if that is the case, then @Alasdair's answer of using TransactionTestCase is the way to go.
Otherwise, removing the transaction context switch from around the stuff inside your send function should help.
Since you mentioned pytest as your test runner, I would also recommend making full use of it. The pytest-django plugin comes with nice features such as selectively marking some of your tests as requiring transactions, using markers:
@pytest.mark.django_db(transaction=True)
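Applied to your test, that would look something like this (a sketch, assuming the pytest-django plugin is installed):

import pytest

@pytest.mark.django_db(transaction=True)
def test_send():
    send()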
If installing a plugin is too much, then you could roll your own transaction-managing fixture, something like:

import pytest
from django.db import transaction

@pytest.fixture
def no_transaction(request):
    autocommit = transaction.get_autocommit()
    transaction.set_autocommit(False)

    def rollback():
        transaction.rollback()
        transaction.set_autocommit(autocommit)

    request.addfinalizer(rollback)
Your test_send will then require the no_transaction fixture.
def test_send(no_transaction):
    send()
For those who are still looking for a solution, the serialized_rollback option is the way to go:
class ManagementTestCase(TransactionTestCase):
    serialized_rollback = True

    def test_transfer_ubl(self, MockExact):
        pass


class TestTestCase(TestCase):
    def test_1_user(self):
        get_user_model().objects.get(username="admin")
        self.assertEqual(get_user_model().objects.all().count(), 1)
from the docs
Django can reload that data for you on a per-testcase basis by setting the serialized_rollback option to True in the body of the TestCase or TransactionTestCase, but note that this will slow down that test suite by approximately 3x.
Unfortunately, pytest-django is still missing this feature.
I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about testing and decided to try to write some tests for an extremely simple Django app as a start.
The app has two views (a list view, and a detail view) and a model with four fields:
class News(models.Model):
    title = models.CharField(max_length=250)
    content = models.TextField()
    pub_date = models.DateTimeField(default=datetime.datetime.now)
    slug = models.SlugField(unique=True)
I would like to show you my tests.py file and ask:
Does it make sense?
Am I even testing for the right things?
Are there best practices I'm not following, and you could point me to?
my tests.py (it contains 11 tests):
# -*- coding: utf-8 -*-
from django.test import TestCase
from django.test.client import Client
from django.core.urlresolvers import reverse
import datetime

from someproject.myapp.models import News


class viewTest(TestCase):
    def setUp(self):
        self.test_title = u'Test title: bąrekść'
        self.test_content = u'This is a content 156'
        self.test_slug = u'test-title-bareksc'
        self.test_pub_date = datetime.datetime.today()

        self.test_item = News.objects.create(
            title=self.test_title,
            content=self.test_content,
            slug=self.test_slug,
            pub_date=self.test_pub_date,
        )

        client = Client()
        self.response_detail = client.get(self.test_item.get_absolute_url())
        self.response_index = client.get(reverse('the-list-view'))

    def test_detail_status_code(self):
        """
        HTTP status code for the detail view
        """
        self.failUnlessEqual(self.response_detail.status_code, 200)

    def test_list_status_code(self):
        """
        HTTP status code for the list view
        """
        self.failUnlessEqual(self.response_index.status_code, 200)

    def test_list_numer_of_items(self):
        self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

    def test_detail_title(self):
        self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

    def test_list_title(self):
        self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

    def test_detail_content(self):
        self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

    def test_list_content(self):
        self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

    def test_detail_slug(self):
        self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

    def test_list_slug(self):
        self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

    def test_detail_template(self):
        self.assertContains(self.response_detail, self.test_title)
        self.assertContains(self.response_detail, self.test_content)

    def test_list_template(self):
        self.assertContains(self.response_index, self.test_title)
I am not an expert in testing, but here are a few thoughts:
Basically you should test every function, method, class, whatever, that you have written by yourself.
This implies that you don't have to test functions, classes, etc. which the framework provides.
That said, a quick check of your test functions:
test_detail_status_code and test_list_status_code:
Ok to check whether you have configured the routing properly or not. Even more important when you provide your own implementation of get_absolute_url().
test_list_numer_of_items:
Ok if a certain number of items should be returned by the view. Not necessary if the number is not important (i.e. arbitrary).
test_detail_template and test_list_template:
Ok to check whether template variables are correctly set.
All the other functions: Not necessary.
What you are basically testing here is whether the ORM works properly, whether lists work as expected, and whether object properties can be accessed (or not). As long as you don't change e.g. the save() method of a model and/or provide your own custom logic, I would not test this. You should trust the framework developers that this works properly.
You only should have to test what you have (over)written.
The model classes are maybe a special case. You basically have to test them, as I said, if you provide custom logic. But you should also test them against your requirements. E.g. it could be that a field is not allowed to be null (or that it has to be a certain datatype, like integer). So you should test that storing an object fails if it has a null value in that field (see the sketch below).
This does not test the ORM for correctly following your specification, but tests that the specification still fulfills your requirements. It might be that you change the model and change some settings (by chance or because you forgot about the requirements).
But you don't have to test e.g. methods like save() or whether you can access a property.
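For instance, the null-constraint idea could be pinned down with a small test against the News model from the question; this is only a sketch, using full_clean() to trigger field validation:

from django.core.exceptions import ValidationError
from django.test import TestCase

from someproject.myapp.models import News

class NewsRequirementsTest(TestCase):
    def test_title_is_required(self):
        item = News(content='some content', slug='some-slug')
        # full_clean() enforces the field constraints declared on the model,
        # so a missing title should raise a ValidationError.
        with self.assertRaises(ValidationError):
            item.full_clean()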
Of course when you use buggy third party code... well things can be different. But as Django uses the test framework itself to verify that everything is working, I would assume it is working.
To sum up:
Test against your requirements, test your own code.
This is only my point of view. Maybe others have better proposals.
Break your tests into two completely separate kinds.
Model tests. Put these in your models.py file with your model. These tests will exercise the methods in your model classes. You can do simple CRUD (Create, Retrieve, Update, Delete) to simply prove that your model works. Don't test every attribute. Do test field defaults and save() rules if you're curious.
For your example, create a TestNews class that creates, gets, updates and deletes a News item. Be sure to test the default date results. This class should be short and to the point. You can, if your application requires it, test various kinds of filter processing. Your unit test code can (and should) provide examples of the "right" way to filter News.
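A rough outline of what that TestNews class might look like; only a sketch of the shape, not a complete suite:

import datetime
from django.test import TestCase

from someproject.myapp.models import News

class TestNews(TestCase):
    def test_crud_and_default_date(self):
        # Create
        item = News.objects.create(title='A title', content='Some content', slug='a-title')
        # Retrieve -- and check that the pub_date default was applied.
        fetched = News.objects.get(slug='a-title')
        self.assertTrue(isinstance(fetched.pub_date, datetime.datetime))
        # Update
        fetched.title = 'A new title'
        fetched.save()
        self.assertEqual(News.objects.get(pk=fetched.pk).title, 'A new title')
        # Delete
        fetched.delete()
        self.assertEqual(News.objects.count(), 0)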
UI Tests. Put these in a separate tests.py file. These tests will test the view functions and templates.
Name the TestCase with the "condition" you're creating. "TestNotLoggedIn". "TestLoggedIn". "TestNoValidThis". "TestNotAllowedToDoThat". Your setUp will do the login and any other steps required to establish the required condition.
Name each test method with the action and result. "test_get_noquery_should_list", "test_post_should_validate_with_errors", "test_get_query_should_detail".
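Put together, the skeleton of such a UI test might look like this; the class name, URLs, and assertions are only illustrative placeholders:

from django.test import TestCase
from django.test.client import Client

class TestNotLoggedIn(TestCase):
    def setUp(self):
        # Establish the condition: a plain client with no login performed.
        self.client = Client()

    def test_get_noquery_should_list(self):
        # '/news/' is a placeholder; use reverse() with your actual view names.
        response = self.client.get('/news/')
        self.assertEqual(response.status_code, 200)

    def test_get_query_should_detail(self):
        response = self.client.get('/news/some-slug/')
        self.assertEqual(response.status_code, 200)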