Is there a way to show errors that occurred on the server during a StaticLiveServerTestCase directly in the test feedback? That is, when some server-side function call raises an error and the page simply doesn't show up, the test execution by default has no knowledge of the server error. Is there some way to pass that output on to the testing thread?
Preferably these errors would show up in the same place that errors raised directly in the test code show up. If that isn't (easily) possible, what's the next best way to quickly see those server errors?
Thanks!
Code (as requested):
class TestFunctionalVisitor(StaticLiveServerTestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_visitor(self):
        self.browser.get(self.live_server_url)
        self.assertEqual(self.browser.title, "Something")
        ...
class Home(TemplateView):
    template_name = 'home.html'

    def get_context_data(self):
        context = {}
        MyModel = None
        context['my_models'] = MyModel.objects.all()
        return context
This has been significantly altered to make it simple and short. But when MyModel is None and the view tries to call objects.all() on it, the server returns a 500 error, yet all I get from the test output is the "Something" not in self.browser.title assertion failure, when I'd like to see the NoneType has no... error in the test output.
To see the errors immediately, run the test in DEBUG mode:
from django.test.utils import override_settings

@override_settings(DEBUG=True)
class DjkSampleTestCase(StaticLiveServerTestCase):

    # fixtures = ['club_app_phase01_2017-01-09_13-30-19-169537.json']

    reset_sequences = True
But one should also configure logging of server-side errors, either by providing a custom implementation of the django.core.handlers.base.BaseHandler class handle_uncaught_exception() method, or via Sentry.
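For instance, a minimal LOGGING sketch in settings.py (the handler name and level here are assumptions, adjust to taste) routes unhandled view exceptions, which Django reports through the django.request logger, to the console so they appear alongside the test run output:

# settings.py -- a minimal sketch; handler names and levels are assumptions
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Django logs unhandled exceptions from views through this logger
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}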
I usually override the default logger using:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(filename)s:%(lineno)d - %(message)s')
This will display the log output on stderr in your terminal. You can even do:
logging.debug('My var %s', var)
I only do this for debugging; if you want to use logging for non-debugging things, I'd suggest creating custom loggers.
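For illustration, a rough sketch of what a custom named logger could look like (the format string and level are just placeholders):

import logging

# a module-level named logger instead of the root logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

logger.info('Named loggers keep your app output separate from library noise')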
More details about logging:
https://docs.djangoproject.com/en/1.10/topics/logging/
https://docs.python.org/3/library/logging.html
This is exactly why it is recommended to have many more unit tests than integration and UI/end-to-end tests. Aside from other things, the latter don't provide you with specific feedback, and you often need more time to debug and investigate why a UI test failed. On the other hand, when a unit test fails, it is usually a failure or an exception pointing you to a specific line in the code - you get the "What went wrong" answer right away.
In other words, the point is: Cover this particular problem with unit tests leaving your UI test as is.
To help you gather more information about the "Something" not in self.browser.title failure, turn logging on and log as much detail as possible. You may also use the built-in Error Reporting and, for instance, let Django send you an email on a 500 error. In other words, collect all the details and troubleshoot the failure manually.
One advantage of using logging in Python instead of print is that you can set the level of the logging. When debugging you could set the level to DEBUG, and everything at that level and above will get printed. If you set the level to ERROR, then only ERROR and more severe messages will get printed.
In a high-performance application this property is desirable. You want to be able to print some logging information during development/testing/debugging but not when you run it in production.
I want to ask if logging will be an efficient way to suppress debug and info messages when you set the level to ERROR. In other words, would the following:
logging.basicConfig(level=logging.ERROR)
logging.debug('something')
be as efficient as
if in_debug:
    print('...')
Obviously the second snippet costs almost nothing, because checking a boolean is fast, and when not in debug mode the code will be faster because it won't print unnecessary stuff. It comes at the cost of having all those if statements, though. If logging delivers the same performance without the if statements, that is of course much more desirable.
There's no need for you to check in_debug in your source code, because the logging module already does that.
Here's an excerpt, with some comments and whitespace removed:
class Logger(Filterer):
    def debug(self, msg, *args, **kwargs):
        if self.isEnabledFor(DEBUG):
            self._log(DEBUG, msg, args, **kwargs)
Just make sure you follow the pylint guidelines on how to pass parameters to the logging functions, so that arguments aren't formatted before the logging code decides whether the message is enabled. See PyLint message: logging-format-interpolation
I first learned about it through pylint warnings, but here's the official documentation that says to use % formatting and pass arguments to be evaluated later: https://docs.python.org/3/howto/logging.html#optimization
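As a small sketch of the difference (not from the answer above, just an illustration):

import logging

logging.basicConfig(level=logging.ERROR)  # DEBUG messages are suppressed
logger = logging.getLogger(__name__)

data = list(range(1000))

# deferred: the message string is only built if the DEBUG level is enabled
logger.debug('data is %s', data)

# eager: the full string is built here, before debug() even checks the level
logger.debug('data is {}'.format(data))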
I'm writing some code in Python to try to create a simple, object-oriented socket server-client connection. I would like to keep my code clean, hence I would prefer to avoid excessive "if" statements. So my class definition for the server socket looks like this:
class Server(object):
    def __init__(self, server_ip=LOCALHOST, server_port=55555, debug=True):
        # other code
        pass
So wherever it seems useful to have some sort of debugging output, I have included an if statement like the following:
if debug:
    print("Something that helps the code debugging")
Therefore I have wondered if there is a way to exclude these kinds of code chunks from the main class and move them into a wrapper function that defines a decorator. Is that possible? And if so, how can I implement this feature?
Thank you very much for your time and also, excuse my English, I am still practising it!
It's up to you, but for logging logic it's actually very convenient to use the built-in logging module, which has a clear setup pattern:
import logging

logger = logging.getLogger(__name__)

log_handler = logging.StreamHandler()  # outputs logs to stderr by default
logger.addHandler(log_handler)
logger.setLevel(logging.DEBUG)  # set the log level here; parametrize it for real use (or dive deeper into the logging docs)

def your_foo():
    logger.debug('Debug log message')  # will only fire if the log level is DEBUG
If you still want to stick with a decorator, I can suggest this talk (the part on decorators): James Powell: So you want to be a Python expert? | PyData Seattle 2017
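If you do go the decorator route, a minimal sketch (log_calls and handle_client are made-up names for illustration) could log entry and exit of a method at DEBUG level, keeping the debug chatter out of the method body:

import functools
import logging

logger = logging.getLogger(__name__)

def log_calls(func):
    # logs every call to the wrapped function at DEBUG level
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.debug('calling %s args=%r kwargs=%r', func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logger.debug('%s returned %r', func.__name__, result)
        return result
    return wrapper

class Server(object):
    @log_calls
    def handle_client(self, address):
        # other code
        return address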
Ok so in my environment.py file I am able to log stuff by:
logging.basicConfig(level=logging.DEBUG, filename="example.log")
def before_feature(context, feature):
    logging.info("test logging")
but when I am inside the steps file I cannot perform logging:
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@given("we have a step")
def step_impl(context):
    logger.debug("Test logging 2")
The logging message inside the step does not show up. I am using the python behave module. Any ideas?
I have tried enabling and disabling logcapture when I run behave but it makes no difference.
By default, behave tends to capture logs during feature execution, and only display them in cases of failure.
To disable this, you can set
log_capture=false
in behave.ini
Or, you can use the --no-logcapture command line option
Further reading: Behave API Reference, Behave LogCapture
what worked for me:
behave --no-capture --no-capture-stderr --no-logcapture
and add the following snippet in environment.py:
def after_step(context, step):
    print("")
Why: I discovered that behave does not log the last print statement of a step, so I just added an empty print after each step with the snippet above.
Hope it helped
Importing logging from environment.py in steps.py solved the problem for me:
from features.environment import logging
I am not sure, but I guess the problem is that every time logging is configured it disables your previously created loggers, because disable_existing_loggers is True by default. (Here is the documentation paragraph explaining this.)
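For example, a sketch using dictConfig; the key point is passing disable_existing_loggers=False so loggers created earlier (e.g. at import time in step modules) keep working:

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,  # don't silence loggers created before this call
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
})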
We are trying to write an automated test for the behavior of the AppConfig.ready function, which we are using as an initialization hook to run code when the Django app has loaded. Our ready method implementation uses a Django setting that we need to override in our test, and naturally we're trying to use the override_settings decorator to achieve this.
There is a snag however - when the test runs, at the point the ready function is executed, the setting override hasn't kicked in (it is still using the original value from settings.py). Is there a way that we can still override the setting in a way where the override will apply when the ready function is called?
Some code to demonstrate this behavior:
settings.py
MY_SETTING = 'original value'
dummy_app/__init__.py
default_app_config = 'dummy_app.apps.DummyAppConfig'
dummy_app/apps.py
from django.apps import AppConfig
from django.conf import settings
class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        print('settings.MY_SETTING in app config ready function: {0}'.format(settings.MY_SETTING))
dummy_app/tests.py
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings
@override_settings(MY_SETTING='overridden value')
@override_settings(INSTALLED_APPS=('dummy_app',))
class AppConfigTests(TestCase):
    def test_to_see_where_overridden_settings_value_is_available(self):
        print('settings.MY_SETTING in test function: {0}'.format(settings.MY_SETTING))
        self.fail('Trigger test output')
Output
======================================================================
FAIL: test_to_see_where_overridden_settings_value_is_available (dummy_app.tests.AppConfigTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/labminds/venv/labos/src/dom-base/dummy_app/tests.py", line 12, in test_to_see_where_overridden_settings_value_is_available
self.fail('Trigger test output')
AssertionError: Trigger test output
-------------------- >> begin captured stdout << ---------------------
settings.MY_SETTING in app config ready function: original value
settings.MY_SETTING in test function: overridden value
--------------------- >> end captured stdout << ----------------------
It is important to note that we only want to override this setting for the tests that are asserting the behavior of ready, which is why we aren't considering changing the setting in settings.py, or using a separate version of this file used just for running our automated tests.
One option already considered - we could simply initialize the AppConfig class in our test, call ready and test the behavior that way (at which point the setting would be overridden by the decorator). However, we would prefer to run this as an integration test, and rely on the natural behavior of Django to call the function for us - this is key functionality for us and we want to make sure the test fails if Django's initialization behavior changes.
Some ideas (requiring different amounts of effort and providing different levels of automated assurance):
Don't integration test, and rely on reading the release notes/commits before upgrading the Django version, and/or rely on a single round of manual testing
Assuming a test - stage deploy - prod deploy pipeline, unit test the special cases in isolation and add an integration check as a deployment smoke test (e.g. by exposing this setting's value through a management command or an internal-only URL endpoint) - only verifying that in staging it has the value it should have for staging. Slightly delayed feedback compared to unit tests
Test it through a test framework outside of Django's own - i.e. write the unit tests (or py.test tests) and bootstrap Django inside each test (though you need a way to import and manipulate the settings)
Use a combination of overriding settings via the OS's environment (we've used envdir à la the 12-factor app) and a management command that would do the test(s) - e.g.: MY_SETTING='overridden value' INSTALLED_APPS='dummy_app' EXPECTED_OUTCOME='whatever' python manage.py ensure_app_config_initialized_as_expected (see the sketch after this list)
Looking at Django's own app init tests, something like apps.clear_cache() and
with override_settings(INSTALLED_APPS=['test_app']):
    config = apps.get_app_config('test_app')
    assert config....
could work, though I've never tried it
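A rough sketch of what such a management command might look like (the command module, the EXPECTED_OUTCOME environment variable and MY_SETTING are the hypothetical names from the example above, not an existing API):

# dummy_app/management/commands/ensure_app_config_initialized_as_expected.py
import os

from django.conf import settings
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = 'Fail loudly if MY_SETTING does not match EXPECTED_OUTCOME'

    def handle(self, *args, **options):
        expected = os.environ.get('EXPECTED_OUTCOME')
        if settings.MY_SETTING != expected:
            raise CommandError(
                'MY_SETTING is %r, expected %r' % (settings.MY_SETTING, expected))
        self.stdout.write('app config initialized as expected')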
You appear to have hit a documented limitation of ready in Django (scroll down to the warning). You can see the discussion in the ticket that prompted the edit. The ticket specifically refers to database interactions, but the same limitation would apply to any effort to test the ready function -- i.e. that production (not test) settings are used during ready.
Based on the ticket, "don't use ready" sounds like the official answer, but I don't find that attitude useful unless they direct me to a functionally equivalent place to run this kind of initialization code. ready seems to be the most official place to run once on startup.
Rather than (re)calling ready, I suggest having ready call a second method. Import and use that second method in your test cases. Not only will your tests be cleaner, but it isolates the test case from any other ready logic, like attaching signals. There's also a context manager that can be used to simplify the test:
@override_settings(SOME_SETTING='some-data')
def test(self):
    ...
or
def test(self):
    with override_settings(SOME_SETTING='some-data'):
        ...
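As a sketch of the "ready calls a second method" idea (init_my_setting and the test class below are made-up names for illustration, not part of the original code):

# dummy_app/apps.py
from django.apps import AppConfig

class DummyAppConfig(AppConfig):
    name = 'dummy_app'

    def ready(self):
        # keep ready() thin; all real work lives in a method tests can call directly
        self.init_my_setting()

    def init_my_setting(self):
        from django.conf import settings
        print('MY_SETTING is: {0}'.format(settings.MY_SETTING))

# dummy_app/tests.py
from django.apps import apps
from django.test import TestCase
from django.test.utils import override_settings

class AppConfigInitTests(TestCase):
    @override_settings(MY_SETTING='overridden value')
    def test_init_my_setting(self):
        apps.get_app_config('dummy_app').init_my_setting()  # override is active here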
P.S. We work around several possible issues in ready by checking the migration status of the system:
def ready(self):
    # imports have to be delayed for ready
    from django.db.migrations.executor import MigrationExecutor
    from django.conf import settings
    from django.db import connections, DEFAULT_DB_ALIAS

    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    if plan:
        # not healthy (possibly setup for a migration)
        return
    ...
Perhaps something similar could be done to prevent execution during tests. Somehow the system knows to (eventually) switch to test settings. I assume you could skip execution under the same conditions.
Up to now, I've been peppering my code with 'print debug message' and even 'if condition: print debug message'. But a number of people have told me that's not the best way to do it, and I really should learn how to use the logging module. After a quick read, it looks as though it does everything I could possibly want, and then some. It looks like a learning project in its own right, and I want to work on other projects now and simply use the minimum functionality to help me. If it makes any difference, I am on Python 2.6 and will be for the foreseeable future, due to library and legacy compatibilities.
All I want to do at the moment is pepper my code with messages that I can turn on and off section by section, as I manage to debug specific regions. As a 'hello_log_world', I tried this, and it doesn't do what I expected
import logging
# logging.basicConfig(level=logging.DEBUG)
logging.error('first error')
logging.debug('first debug')
logging.basicConfig(level=logging.DEBUG)
logging.error('second error')
logging.debug('second debug')
You'll notice I'm using the really basic config, using as many defaults as possible, to keep things simple. But it appears that it's too simple, or that I don't understand the programming model behind logging.
I had expected that sys.stderr would end up with
ERROR:root:first error
ERROR:root:second error
DEBUG:root:second debug
... but only the two error messages appear. Setting level=DEBUG doesn't make the second one appear. If I uncomment the basicConfig call at the start of the program, all four get output.
Am I trying to run it at too simple a level?
What's the simplest thing I can add to what I've written there to get my expected behaviour?
Logging actually follows a particular hierarchy (DEBUG -> INFO -> WARNING -> ERROR -> CRITICAL), and the default level is WARNING. Therefore you see the two ERROR messages because ERROR is above WARNING in the hierarchy chain.
As for the odd commenting behavior, the explanation is found in the logging docs (which as you say are a task unto themselves :) ):
The call to basicConfig() should come before any calls to debug(),
info() etc. As it’s intended as a one-off simple configuration
facility, only the first call will actually do anything: subsequent
calls are effectively no-ops.
However, you can use the setLevel method to get what you desire:
import logging
logging.getLogger().setLevel(logging.ERROR)
logging.error('first error')
logging.debug('first debug')
logging.getLogger().setLevel(logging.DEBUG)
logging.error('second error')
logging.debug('second debug')
The lack of an argument to getLogger() means that the root logger is modified. This is essentially one step before @del's (good) answer, where you start getting into multiple loggers, each with their own specific properties/output levels/etc.
Rather than modifying the logging levels in your code to control the output, you should consider creating multiple loggers, and setting the logging level for each one individually. For example:
import logging
first_logger = logging.getLogger('first')
second_logger = logging.getLogger('second')
logging.basicConfig()
first_logger.setLevel(logging.ERROR)
second_logger.setLevel(logging.DEBUG)
first_logger.error('first error')
first_logger.debug('first debug')
second_logger.error('second error')
second_logger.debug('second debug')
This outputs:
ERROR:first:first error
ERROR:second:second error
DEBUG:second:second debug