Python: Where to put logging.getLogger

I tried to put getLogger at the module level. However, it has some disadvantages. Here is my example:
import logging
from logging.handlers import TimedRotatingFileHandler

log_monitor = logging.getLogger('monitorlog')
log_monitor.setLevel(logging.DEBUG)
log_monitor.propagate = False
handler = TimedRotatingFileHandler('somedirectory/monitor.log',
                                   when='h',
                                   interval=1,
                                   backupCount=30)
monitor_format = logging.Formatter('%(asctime)s: %(message)s')
handler.setFormatter(monitor_format)
log_monitor.addHandler(handler)
The problem is that when some other module imports this one, the above code is executed at import time. It is possible that, at that point, somedirectory does not even exist, and the build will fail.
Actually, this logger will be used in a class, so I am thinking of putting getLogger into the class. However, I feel that if people create multiple objects of that class, that piece of code will be called multiple times, and I guess this part of the code is supposed to be called only once.
I am pretty new to Python. Where do people usually put their getLogger call, and in this case, where should I put this piece of code?

Short answer: you just need to make sure you do your logger setup after the directory is created.
If you want to import the above and only then create the file, one way to do it is to put your code in a function.
import logging
from logging.handlers import TimedRotatingFileHandler

def monitor_log_setup():
    log_monitor = logging.getLogger('monitorlog')
    log_monitor.setLevel(logging.DEBUG)
    log_monitor.propagate = False
    handler = TimedRotatingFileHandler('somedirectory/monitor.log',
                                       when='h',
                                       interval=1,
                                       backupCount=30)
    monitor_format = logging.Formatter('%(asctime)s: %(message)s')
    handler.setFormatter(monitor_format)
    log_monitor.addHandler(handler)
    return log_monitor
It is now safe to import this module; you just have to make sure the function is called before you want to start logging (after creating the directory).
You can then use logging.getLogger('monitorlog') to return the same logger as defined in the function whenever you need it throughout your code.
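For example, a minimal sketch of the startup sequence (assuming the same directory and logger name as above):

import logging
import os

os.makedirs('somedirectory', exist_ok=True)  # make sure the directory exists first
monitor_log_setup()                          # configure the logger once

# anywhere else in the code base, fetch the same logger by name:
logging.getLogger('monitorlog').debug('monitoring started')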

I think the problem is that you are mixing up "import" with "init": you expect that after the caller imports the library or module, the log object is already available. That behaviour leads to confusion.
I think the best practice is to provide an "init" method; the caller calls it explicitly to make the logger available.
Alternatively, you could provide an implicit init step in the file, or simply create the log directory and file if they do not exist.
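For example, a rough sketch of such an explicit init (the function name init_monitor_logging and the directory handling are illustrative, not part of the original code):

import logging
import os
from logging.handlers import TimedRotatingFileHandler

def init_monitor_logging(directory='somedirectory'):
    # create the directory on demand so setup cannot fail at import time
    os.makedirs(directory, exist_ok=True)
    log_monitor = logging.getLogger('monitorlog')
    log_monitor.setLevel(logging.DEBUG)
    log_monitor.propagate = False
    handler = TimedRotatingFileHandler(os.path.join(directory, 'monitor.log'),
                                       when='h', interval=1, backupCount=30)
    handler.setFormatter(logging.Formatter('%(asctime)s: %(message)s'))
    log_monitor.addHandler(handler)
    return log_monitor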

Related

How to catch errors that were only logged?

Main Question
I am using a module that relies on logging instead of raising error messages. How can I catch logged errors from within Python to react to them (without dissecting the log file)?
Minimal Example
Suppose logging_module.py looks like this:
import logging
import random

def foo():
    logger = logging.getLogger("quz")
    if random.choice([True, False]):
        logger.error("Doooom")
If this module used exceptions, I could do something like this:
from logging_module import foo, Doooom

try:
    foo()
except Doooom:
    bar()
Assuming that logging_module is written the way it is and I cannot change it, this is impossible. What can I do instead?
What I considered so far
I went through the logging documentation (though I did not read every word), but the only way to access what is logged seems to be dissecting the actual log, which seems overly tedious to me (but I may misunderstand this).
You can add a filter to the logger that the module uses to inspect every log record. The documentation has this to say on using filters for something like that:
Although filters are used primarily to filter records based on more sophisticated criteria than levels, they get to see every record which is processed by the handler or logger they’re attached to: this can be useful if you want to do things like counting how many records were processed by a particular logger or handler
The code below assumes that you are using the logging_module that you showed in the question and tries to emulate what the try-except does: that is, when an error happens inside a call of foo the function bar is called.
import logging
from logging_module import foo

def bar():
    print('error was logged')

def filt(r):
    if r.levelno == logging.ERROR:
        bar()
    return True

logger = logging.getLogger('quz')
logger.addFilter(filt)

foo()  # bar will be called if this logs an error
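A filter can also be an object with a filter method, which makes it easy to keep state, for instance for the record-counting use case the documentation mentions. A rough sketch, still assuming the logging_module above:

import logging
from logging_module import foo

class ErrorCounter:
    """Filter object that counts ERROR records while letting everything through."""
    def __init__(self):
        self.errors = 0

    def filter(self, record):
        if record.levelno >= logging.ERROR:
            self.errors += 1
        return True  # never suppress the record

counter = ErrorCounter()
logging.getLogger('quz').addFilter(counter)

foo()
print(counter.errors)  # number of errors foo() logged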

How to disable a try/except block during testing?

I wrote a cronjob that iterates through a list of accounts and performs some web call for them (shown below):
for account in self.ActiveAccountFactory():
    try:
        self.logger.debug('Updating %s', account.login)
        self.update_account_from_fb(account)
        self.check_balances()
        self.check_rois()
    except Exception,e:
        self.logger.exception(traceback.format_exc())
Because this job is run by Heroku once every 10 minutes, I do not want the entire job to fail just because one account is running into issues (it happens). I placed a try/except clause here so that this task is "fault-tolerant".
However, I noticed that when I am testing, this try/except block is giving me cryptic problems because the task is allowed to continue executing even though there is some serious error.
What is the best way to disable a try/except block during testing?
I've thought about implementing the code directly like this:
for account in self.ActiveAccountFactory():
    self.logger.debug('Updating %s', account.login)
    self.update_account_from_fb(account)
    self.check_balances()
    self.check_rois()
    self.logger.exception(traceback.format_exc())
in my test cases but then this makes my tests very clumsy as I am copying large amounts of code over.
What should I do?
First of all: don't swallow all exceptions using except Exception. It's bad design. So cut it out.
With that out of the way:
One thing you could do is set up a monkeypatch for the logger.exception method. Then you can handle the test however you see fit based on whether it was called, whether that means creating a mock logger, a separate testing logger, or a custom testing logger class that stops the tests when certain exceptions occur. You could even choose to end the testing immediately by raising an error.
Here is an example using pytest.monkeypatch. I like pytest's way of doing this because they already have a predefined fixture set up for it, and no boilerplate code is required. However, there are other ways to do this as well (such as using unittest.mock.patch as part of the unittest module).
I will call your class SomeClass. What we will do is create a patched version of your SomeClass object as a fixture. The patched version will not log to the logger; instead, it will have a mock logger. Anything that happens to the logger will be recorded in the mock logger for inspection later.
import pytest
import unittest.mock as mock  # import mock for Python 2

@pytest.fixture
def SomeClassObj_with_patched_logger(monkeypatch):
    ##### SETUP PHASE ####
    # create a basic mock logger:
    mock_logger = mock.Mock(spec=LoggerClass)

    # create the class instance you will test
    some_class_instance = SomeClass()

    # patch the 'logger' attribute on the instance so that anything the
    # methods do through 'self.logger' is re-routed to mock_logger
    monkeypatch.setattr(some_class_instance, 'logger', mock_logger)

    # the class object you created is now patched;
    # we can send that patched object to any test we want
    # using the standard pytest fixture way of doing things
    yield some_class_instance

    ###### TEARDOWN PHASE #######
    # after the test has run, we can inspect what happened to
    # the mock logger like so:
    print('\n#### ', mock_logger.method_calls)
If call.exception appears in the method calls of the mock logger, you know that method was called. There are a lot of other ways you could handle this as well; this is just one.
If you're using the logging module, LoggerClass should just be logging.Logger. Alternatively, you can just do mock_logger = mock.Mock(). Or, you could create your own custom testing logger class that raises an exception when its exception method is called. The sky is the limit!
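For instance, the "custom testing logger class" idea could look roughly like this (a sketch; the class name and message are made up):

import logging

class RaisingTestLogger(logging.Logger):
    # a logger whose exception() method also fails the test outright
    def exception(self, msg, *args, **kwargs):
        super().exception(msg, *args, **kwargs)
        raise AssertionError('logger.exception was called: %s' % msg)

# must be set before the logger under test is first created via getLogger()
logging.setLoggerClass(RaisingTestLogger)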
Use your patched object in any test like so:
def test_something(SomeClassObj_with_patched_logger):
    # no need to do the line below really, just getting
    # a shorter variable name
    my_obj = SomeClassObj_with_patched_logger
    #### DO STUFF WITH my_obj #####
If you are not familiar with pytest, see this training video for a little bit more in depth information.
try...except blocks are difficult when you are testing because they catch and try to dispose of errors you would really rather see, as you have found out. While testing, for
except Exception as e:
(don't use Exception,e, it's not forward-compatible) substitute an exception type that is really unlikely to occur in your circumstances, such as
except AssertionError as e:
A text editor will do this for you (and reverse it afterwards) at the cost of a couple of mouse-clicks.
You can make callables test-aware by adding a _testing=False parameter. Use that to code alternate pathways in the callable for when testing. Then pass _testing=True when calling from a test file.
For the situation presented in this question, putting if _testing: raise in the exception body would 'uncatch' the exception.
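A small sketch of that pattern applied to the loop from the question (the method name update_all_accounts is made up here):

def update_all_accounts(self, _testing=False):
    for account in self.ActiveAccountFactory():
        try:
            self.logger.debug('Updating %s', account.login)
            self.update_account_from_fb(account)
            self.check_balances()
            self.check_rois()
        except Exception:
            self.logger.exception('update failed for %s', account.login)
            if _testing:
                raise  # re-raise so the test sees the real error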
Conditioning module-level code is trickier. To get special behaviour when testing module mod in package pack, I put
_testing = False # in `pack.__init__`
from pack import _testing # in pack.mod
Then in test_mod I put something like:
import pack
pack._testing = True
from pack import mod

How to extend built-in classes in python

I'm writing some code for an ESP8266 microcontroller using MicroPython, which has some different classes as well as some additional methods on the standard built-in classes. To allow me to debug on my desktop I've built some helper classes so that the code will run. However, I've run into a snag with MicroPython's time module, which has a time.sleep_ms method because the standard time.sleep method on MicroPython does not accept floats. I tried using the following code to extend the built-in time class, but it fails to import properly. Any thoughts?
class time(time):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def sleep_ms(self, ms):
        super().sleep(ms/1000)
This code exists in a file time.py. Secondly, I know I'll have issues with having to import time.time, which I would like to fix. I also realize I could call this something else and put traps for it in my microcontroller code; however, I would like to avoid any special functions in what's loaded onto the controller, to save space and cycles.
You're not trying to override a class, you're trying to monkey-patch a module.
First off, if your module is named time.py, it will never be loaded in preference to the built-in time module. Truly built-in (as in compiled into the interpreter core, not just C extension modules that ship with CPython) modules are special, they are always loaded without checking sys.path, so you can't even attempt to shadow the time module, even if you wanted to (you generally don't, and doing so is incredibly ugly). In this case, the built-in time module shadows you; you can't import your module under the plain name time at all, because the built-in will be found without even looking at sys.path.
Secondly, assuming you use a different name and import it for the sole purpose of monkey-patching time (or do something terrible like adding the monkey-patch to a custom sitecustomize module), it's not trivial to make the function truly native to the monkey-patched module (defining it in any normal way gives it a scope of the module where it was defined, not the same scope as other functions from the time module). If you don't need it to be "truly" defined as part of time, the simplest approach is just:
import time

def sleep_ms(ms):
    return time.sleep(ms / 1000)

time.sleep_ms = sleep_ms
Of course, as mentioned, sleep_ms is still part of your module, and carries your module's scope around with it (that's why you do time.sleep, not just sleep; you could do from time import sleep to avoid qualifying it, but it's still a local alias that might not match time.sleep if someone else monkey-patches time.sleep later).
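After the patch has run, callers just use the new attribute on the time module. For example (assuming the snippet above lives in a hypothetical module called timepatch):

import timepatch  # hypothetical module containing the monkey-patch above
import time

time.sleep_ms(250)  # sleeps for 0.25 seconds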
If you want to make it behave like it's part of the time module, so you can reference arbitrary things in time's namespace without qualification and always see the current function in time, you need to use eval to compile your code in time's scope:
import time
# Compile a string of the function's source to a code object that's not
# attached to any scope at all
# The filename argument is garbage, it's just for exception traceback
# reporting and the like
code = compile('def sleep_ms(ms): sleep(ms / 1000)', 'time.py', 'exec')
# eval the compiled code with a scope of the globals of the time module
# which both binds it to the time module's scope, and inserts the newly
# defined function directly into the time module's globals without
# defining it in your own module at all
eval(code, vars(time))
del code, time # May as well leave your monkey-patch module completely empty

Avoid `logger=logging.getLogger(__name__)` without losing the ability to filter logs

I am lazy and want to avoid this line in every python file which uses logging:
logger = logging.getLogger(__name__)
In january I asked how this could be done, and found an answer: Avoid `logger=logging.getLogger(__name__)`
Unfortunately the answer there has the drawback that you lose the ability to filter.
I really want to avoid this useless and redundant line.
Example:
import logging

def my_method(foo):
    logging.info()
Unfortunately, I think it is impossible to do logger = logging.getLogger(__name__) implicitly when logging.info() gets called for the first time in this file.
Is there anybody out there who knows how to do impossible stuff?
Update
I like Don't Repeat Yourself. If most files contain the same line at the top, I think this is a repetition. It looks like WET. The python interpreter in my head needs to skip this line every time I look there. My subjective feeling: this line is useless bloat. The line should be the implicit default.
Think carefully about whether you really want to do this.
Create a Module e.g. magiclog.py like this:
import logging
import inspect

def L():
    # FIXME: catch indexing errors
    callerframe = inspect.stack()[1][0]
    name = callerframe.f_globals["__name__"]
    # avoid cyclic ref, see https://docs.python.org/2/library/inspect.html#the-interpreter-stack
    del callerframe
    return logging.getLogger(name)
Then you can do:
from magiclog import L
L().info("it works!")
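Because L() resolves the caller's __name__ only when it is called, the usual per-module filtering still works. For example (pack.mod is just an example module name):

import logging

logging.basicConfig(level=logging.WARNING)
# turn on debug output for one specific module, exactly as with explicit module loggers
logging.getLogger('pack.mod').setLevel(logging.DEBUG)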
I am lazy and want to avoid this line in every python file which uses logging:
logger = logging.getLogger(__name__)
Well, it's the recommended way:
A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
This means that logger names track the package/module hierarchy, and it’s intuitively obvious where events are logged just from the logger name.
That's a quote from the official howto.
I like Don't Repeat Yourself. If most files contain the same line at the top, I think this is a repetition. It looks like WET. The python interpreter in my head needs to skip this line every time I look there. My subjective feeling: this line is useless bloat. The line should be the implicit default.
It follows "Explicit is better than implicit". Anyway, you can easily change a Python template in many IDEs to always include this line, or make a new template file.

When should a Django logging instance be created?

Should a logging instance whose runtime configuration will never be altered be created (via getLogger) inside of each function that uses it, or can I create it once and only once outside of the functions?
Example:
import logging

def homepage_view(...):
    log = logging.getLogger(...)
    log.debug('Loaded the homepage')
or
import logging

log = logging.getLogger(...)

def homepage_view(...):
    log.debug('Loaded the homepage')
The second of these is the recommended best practice, using
log = logging.getLogger(__name__)
at the module level.
Update: It's the best practice because it's simpler. Nothing is gained by invoking getLogger in each function that uses it, and loggers are singletons anyway.
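In particular, getLogger returns the same object for a given name every time, so there is nothing to gain from re-creating it per call:

import logging

a = logging.getLogger('myapp.views')
b = logging.getLogger('myapp.views')
assert a is b  # the same logger instance both times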
