Patching Python's logging handlers

I'm writing unit tests to make sure that my log messages for syslog are limited in size. In particular, my Formatter is set like this:
import logging
from logging.handlers import SysLogHandler
[...]
default_fmt = logging.Formatter(
    '%(name)s:%(levelname)s - %(message).{}s'.format(MAX_LOG_MESSAGE_SIZE)
)
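(For reference, the .{}s precision in the %-style format string is what truncates the message; a quick demo, with 10 standing in for MAX_LOG_MESSAGE_SIZE:)
>>> '%(message).10s' % {'message': 'a very long log message'}
'a very lon'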
All that I want to know is that the final message is less than a certain number of characters. I assumed I needed to patch SysLogHandler like this:
with patch('mylib.logger.logging.handlers.SysLogHandler') as m:
    import mylib.logger
    msg = 'foo'
    mylib.logger.debug(msg)
    print m.mock_calls
    print m.call_list()
However running the tests with nosetests doesn't work:
ImportError: No module named logging
What am I forgetting?
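A likely cause: mock.patch imports everything to the left of the target's last dot, so the target above makes it try to import a module named mylib.logger.logging, which produces exactly this ImportError. The usual fix is to patch the name in the namespace where it is looked up. A sketch, assuming mylib/logger.py does from logging.handlers import SysLogHandler:
from unittest.mock import patch  # on Python 2: from mock import patch

with patch('mylib.logger.SysLogHandler') as m:
    import mylib.logger
    mylib.logger.debug('foo')
    print(m.mock_calls)  # the handler class is replaced by a mock here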

Related

Python logging for custom namespace misses prefix

I tried to enable logging for my Python library. I want all logs to go to my custom namespace my_logger instead of root. So this is what I tried:
import logging
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
And for some reason my custom logger didn't output its namespace name (my_logger) as a prefix:
my hello!
WARNING:root:hello!
A simple change of the order of root and my loggers fixed the issue:
import logging
logging.warning("hello!")
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
Output
WARNING:root:hello!
WARNING:my_logger:my hello!
I do not want to use the root logger at all. Is it possible to get the WARNING:my_logger prefix in my output without logging to the root logger first?
It seems the logging module does its first-time configuration during the first logging call: logging.warning() on the root logger implicitly calls logging.basicConfig(), which installs a handler with the WARNING:name: format. Before that happens, a logger with no handlers falls back to the internal "handler of last resort", which prints just the bare message. You can run the configuration first to get it out of the way before using your logger:
import logging
logging.basicConfig() # do the configuration
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
Result:
WARNING:my_logger:my hello!
WARNING:root:hello!
For a library that you expect others to import, the configuration should be done by the application code that imports the library, not by the library itself.
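The logging HOWTO's convention for this is for the library to attach a NullHandler to its top-level logger and never call basicConfig() itself. A minimal sketch, assuming a hypothetical package mylib:
# mylib/__init__.py (sketch): the library only names its logger and
# silences the "no handlers could be found" fallback; the importing
# application decides on handlers, format and level.
import logging

logging.getLogger('mylib').addHandler(logging.NullHandler())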

Python Logging - AttributeError: module 'logging' has no attribute 'handlers'

Observation: When I comment out from logging import handlers the below-mentioned error is observed.
Error:
file_handler = logging.handlers.RotatingFileHandler(
AttributeError: module 'logging' has no attribute 'handlers'
Question: If I have already imported logging, why is it required to also do from logging import handlers?
import logging
import sys
#from logging import handlers

def LoggerDefination():
    #file_handler = logging.FileHandler(filename='..\\logs\\BasicLogger_v0.1.log', mode='a')
    file_handler = logging.handlers.RotatingFileHandler(
        filename="..\\logs\\BasicLogger_v0.2.log",
        mode='a',
        maxBytes=20000,
        backupCount=7,
        encoding=None,
        delay=0
    )
    file_handler.setLevel(logging.DEBUG)
    stdout_handler = logging.StreamHandler(sys.stdout)
    stdout_handler.setLevel(logging.DEBUG)
    handlers = [file_handler, stdout_handler]
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s | %(module)s | %(name)s | LineNo_%(lineno)d | %(levelname)s | %(message)s',
        handlers=handlers
    )

def fnt_test_log1():
    LoggerDefination()
    WriteLog1 = logging.getLogger('fnt_test_log1')
    #WriteLog1.propagate=False
    WriteLog1.info("######## START OF : test_log1 ##########")
    WriteLog1.debug("test_log1 | This is debug level")
    WriteLog1.debug("test_log1 | This is debug level")
    WriteLog1.info("test_log1 | This is info level")
    WriteLog1.warning("test_log1 | This is warning level")
    WriteLog1.error("test_log1 | This is error level")
    WriteLog1.critical("test_log1 | This is critical level")
    WriteLog1.info("######## END OF : test_log1 ##########")

def fnt_test_log2():
    LoggerDefination()
    WriteLog2 = logging.getLogger('fnt_test_log2')
    WriteLog2.info("######## START OF : test_log2 ##########")
    WriteLog2.debug("test_log2 ===> debug")
    WriteLog2.debug("test_log2 | This is debug level")
    WriteLog2.debug("test_log2 | This is debug level")
    WriteLog2.info("test_log2 | This is info level")
    WriteLog2.warning("test_log2 | This is warning level")
    WriteLog2.error("test_log2 | This is error level")
    WriteLog2.critical("test_log2 | This is critical level")
    WriteLog2.info("######## STOP OF : test_log2 ##########")

if __name__ == '__main__':
    LoggerDefination()
    MainLog = logging.getLogger('main')
    MainLog.info("Executing script: " + __file__)
    fnt_test_log1()
    fnt_test_log2()
This is a two-part answer: The first part concerns your immediate question. The second part concerns why your question might have come up in the first place (and, well, how I ended up in this thread, given that your question is already two months old).
First part (the answer to your question): When importing a package like import logging, Python by default never imports subpackages (or submodules) like logging.handlers but only exposes variables to you that are defined in the package's __init__.py file (in this case logging/__init__.py). Unfortunately, it's hard to tell from the outside if logging.handlers is a variable in logging/__init__.py or an actual separate module logging/handlers.py. So you have to take a look at logging/__init__.py and then you will see that it doesn't define a handlers variable and that, instead, there's a module logging/handlers.py which you need to import separately through import logging.handlers or from logging.handlers import TheHandlerYouWantToUse. So this should answer your question.
Second part: I recently noticed that my IDE (PyCharm) always suggests import logging whenever I actually want to use logging.handlers.QueueHandler. And for some arcane reason (and despite what I said in the first part) it's been working! …well, most of the time.
Specifically, in the following code, the return type annotation causes the expected AttributeError: module 'logging' has no attribute 'handlers'. However, after commenting out the annotation (which, absent from __future__ import annotations, is evaluated when the def statement runs at import time), calling the function works – provided I call it "late enough" (more on that below).
import logging
from multiprocessing.queues import Queue

def redirect_logs_to_queue(
    logging_queue: Queue, level: int = logging.DEBUG
) -> logging.handlers.QueueHandler:  # <-------------------------- This fails
    queue_handler = logging.handlers.QueueHandler(logging_queue)  # <--- This works
    root = logging.getLogger()
    root.addHandler(queue_handler)
    root.setLevel(level)
    return queue_handler
So what does "late enough" mean? Unfortunately, my application was a bit too complex to do some quick bug hunting. Nevertheless, it was clear to me that logging.handlers must "become available" at some point between start-up (i.e. all my modules getting loaded/executed) and invoking that function. This gave me the decisive hint: it turns out some other module deep down the package hierarchy was doing from logging.handlers import RotatingFileHandler, QueueListener. This statement loaded the entire logging.handlers module and caused Python to "mount" the handlers module in its parent package logging, meaning that the logging variable would henceforth always come equipped with a handlers attribute, even after a mere import logging. That is why I didn't need to import logging.handlers anymore (and is probably what PyCharm noticed).
Try it out yourself:
This works:
import logging
from logging.handlers import QueueHandler
print(logging.handlers)
This doesn't:
import logging
print(logging.handlers)
All in all, this phenomenon is a result of Python's import mechanism using global state to avoid reloading modules. (Here, the "global state" I'm referring to is the logging variable that you get when you import logging.) While it is treacherous sometimes (as in the case above), it does make perfect sense: If you import logging.handlers in multiple modules, you don't want the code in the logging.handlers module to get loaded (and thus executed!) more than once. And so Python just adds the handlers attribute to the logging module object as soon as you import logging.handlers somewhere and since all modules that import logging share the same logging object, logging.handlers will suddenly be available everywhere.
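You can watch this "mounting" happen through sys.modules:
import sys
import logging

print(hasattr(logging, 'handlers'))       # False: submodule not loaded yet
print('logging.handlers' in sys.modules)  # False

import logging.handlers                   # load it once, anywhere in the process

print(hasattr(logging, 'handlers'))       # True: now an attribute of logging
print('logging.handlers' in sys.modules)  # True: cached for every importer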
Perhaps you have some other module called logging which masks the standard library one. Is there some file called logging.py somewhere in your code base?
How to diagnose
The best way to figure out the issue is to simply run python3, and then the following commands:
Python 3.8.10
>>> import logging
>>> logging
<module 'logging' from '/home/user/my_personal_package/logging/__init__.py'>
This shows that you are using this OTHER package (my_personal_package) instead of the built-in logging package.
It should say:
>>> import logging
>>> logging
<module 'logging' from '/usr/lib/python3.8/logging/__init__.py'>
You can also just put this in a program to debug it:
import logging
print(logging)
How to fix
Fix this by removing the offending logging.py or logging/__init__.py.

Change log level in unittest

I have the impression (but cannot find documentation for it) that unittest sets the logging level to WARNING for all loggers. I would like to:
be able to specify the logging level for all loggers, from the command line (when running the tests) or from the test module itself
avoid unittest messing around with the application logging level: when running the tests I want to have the same logging output (same levels) as when running the application
How can I achieve this?
I don't believe unittest itself does anything to logging, unless you use a _CapturingHandler class which it defines. This simple program demonstrates:
import logging
import unittest

logger = logging.getLogger(__name__)

class MyTestCase(unittest.TestCase):
    def test_something(self):
        logger.debug('logged from test_something')

if __name__ == '__main__':
    # DEBUG for demonstration purposes, but you could set the level from
    # cmdline args to whatever you like
    logging.basicConfig(level=logging.DEBUG, format='%(name)s %(levelname)s %(message)s')
    unittest.main()
When run, it prints
__main__ DEBUG logged from test_something
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
showing that it is logging events at DEBUG level, as expected. So the problem is likely to be related to something else, e.g. the code under test, or some other test runner which changes the logging configuration or redirects sys.stdout and sys.stderr. You will probably need to provide more information about your test environment, or better yet a minimal program that demonstrates the problem (as my example above shows that unittest by itself doesn't cause the problem you're describing).
See the example below for logging in Python. You can also change the log level using the setLevel method.
import os
import logging
logging.basicConfig()
logger = logging.getLogger(__name__)
# Change logging level here.
logger.setLevel(os.environ.get('LOG_LEVEL', logging.INFO))
logger.info('For INFO message')
logger.debug('For DEBUG message')
logger.warning('For WARNING message')
logger.error('For ERROR message')
logger.critical('For CRITICAL message')
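Note that setLevel accepts level names as strings as well as the numeric constants, which is what makes the os.environ.get('LOG_LEVEL', ...) pattern above work when the variable is set to a name like DEBUG:
import logging

logger = logging.getLogger(__name__)
# run as e.g.:  LOG_LEVEL=DEBUG python myscript.py
logger.setLevel('DEBUG')   # equivalent to logger.setLevel(logging.DEBUG)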
This is in addition to Vinay's answer above. It does not answer the original question. I wanted to include command-line options for modifying the log level; the intent was to get detailed logging only when I pass a certain parameter from the command line. This is how I solved it:
import sys
import unittest
import logging
from histogram import Histogram
class TestHistogram(unittest.TestCase):
    def test_case2(self):
        h = Histogram([2,1,2])
        self.assertEqual(h.calculateMaxAreaON(), 3)

if __name__ == '__main__':
    argv = len(sys.argv) > 1 and sys.argv[1]
    loglevel = logging.INFO if argv == '-v' else logging.WARNING
    logging.basicConfig(level=loglevel)
    unittest.main()
The intent is to get more verbose logging. I know it does not answer the question, but I'll leave it here in case someone comes looking for a similar requirement.
This worked for me:
logging.basicConfig(level=logging.DEBUG)
And if I wanted a specific format:
logging.basicConfig(
    level=logging.DEBUG,
    datefmt="%H:%M:%S",
    format="%(asctime)s.%(msecs)03d [%(levelname)-5s] %(message)s",
)
Programmatically:
Put this line of code in each test method where you want to set the logging level:
logging.getLogger().setLevel(logging.INFO)
Ex. class:
import unittest
import logging

class ExampleTest(unittest.TestCase):
    def test_method(self):
        logging.getLogger().setLevel(logging.INFO)
        ...
Command Line:
This example shows how to do it in a normal script; it is not specific to unittest. It captures the log level via the command line, using argparse for arguments:
import logging
import argparse
...

def parse_args():
    parser = argparse.ArgumentParser(description='...')
    parser.add_argument('-v', '--verbose', help='enable verbose logging',
                        action='store_const', dest='loglevel',
                        const=logging.INFO, default=logging.WARNING)
    ...

def main():
    args = parse_args()
    logging.getLogger().setLevel(args.loglevel)
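To apply the same idea to a unittest run specifically, one option is to parse your own flags first and hand everything else to unittest. A sketch (the --log-level flag name is made up here):
import argparse
import logging
import sys
import unittest

if __name__ == '__main__':
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument('--log-level', default='WARNING',
                        help='root logger level, e.g. DEBUG or INFO')
    args, remaining = parser.parse_known_args()
    logging.basicConfig(level=args.log_level.upper())
    # hand the unparsed arguments (plus argv[0]) back to unittest
    unittest.main(argv=[sys.argv[0]] + remaining)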

How should I configure logging in a script and then use that configuration in only my modules?

I want to find out how logging should be organised, given that I write many scripts and modules that should feature similar logging. I want to be able to set the logging appearance and the logging level from the script, and I want the appearance and level to propagate to my modules, and only my modules.
An example script could be something like the following:
import logging
import technicolor
import example_2_module

def main():
    verbose = True
    global log
    log = logging.getLogger(__name__)
    logging.root.addHandler(technicolor.ColorisingStreamHandler())
    # logging level
    if verbose:
        logging.root.setLevel(logging.DEBUG)
    else:
        logging.root.setLevel(logging.INFO)
    log.info("example INFO message in main")
    log.debug("example DEBUG message in main")
    example_2_module.function1()

if __name__ == '__main__':
    main()
An example module could be something like the following:
import logging

log = logging.getLogger(__name__)

def function1():
    print("printout of function 1")
    log.info("example INFO message in module")
    log.debug("example DEBUG message in module")
You can see that the module contains minimal infrastructure to pick up the logging appearance and level set in the script. This has worked fine, but I've encountered a problem: other, third-party modules also use logging. This can result in output being printed twice, and in very detailed debug logging from modules that are not my own.
How should I code this such that the logging appearance/level is set from the script but then used only by my modules?
You need to set the propagate attribute to False so that the log message does not propagate to ancestor loggers. Here is the documentation for Logger.propagate -- it defaults to True. So just:
import logging
log = logging.getLogger(__name__)
log.propagate = False
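One caveat to the answer above: with propagation off, records from these loggers no longer reach handlers attached to the root logger, so attach your handler and level to your own top-level logger instead. A minimal sketch, assuming a hypothetical package myapp:
import logging

# configure the package's top-level logger once, in the script
top = logging.getLogger('myapp')
top.propagate = False                # don't hand messages up to root
top.setLevel(logging.DEBUG)
top.addHandler(logging.StreamHandler())

# in myapp/somemodule.py, getLogger(__name__) yields 'myapp.somemodule',
# a child of 'myapp', so it inherits the handler and level above
log = logging.getLogger('myapp.somemodule')
log.debug('reaches only the myapp handler')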

How to proxy all methods from a Python module to another?

I want to have in my application a common logging module that logs to a file.
For example in my commonlog.py I can have something like this:
# Python logging module
import logging
logging.basicConfig(filename="test.log", level=logging.DEBUG)
From the other modules in the application I want to import this module and be able to use it as if it were the Python logging module, but without replicating all its functions. For example, from module test.py:
import commonlog
commonlog.debug("debug message")
commonlog.info("info message")
commonlog.ANY_OTHER_METHOD_THAT_BELONGS_TO_LOGGING()
How can I "proxy" all the methods from the logging module in my commonlog?
Doing:
commonlog.logging.etc...
is not a valid solution because it's using the logging module directly.
I've never had to "inherit" from a module before, so I don't know whether it's naive to do a from logging import * at the top of commonlog. Here's code showing that it appears to work:
>>> with open('mylogging.py', 'w') as f:
...     f.write('''from logging import *
... my_customization = "it works"''')
...
>>> import mylogging
>>> print mylogging.my_customization
it works
>>> help(mylogging.log)
Help on function log in module logging:

log(level, msg, *args, **kwargs)
    Log 'msg % args' with the integer severity 'level' on the root logger.
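On Python 3.7+ there is a more explicit alternative: a module-level __getattr__ (PEP 562) that delegates unknown attribute lookups to logging, so commonlog proxies the whole module without a star import. A sketch:
# commonlog.py -- proxy every logging attribute via PEP 562
import logging

logging.basicConfig(filename="test.log", level=logging.DEBUG)

def __getattr__(name):
    # called only for names not defined in this module
    return getattr(logging, name)
With this in place, import commonlog; commonlog.debug("debug message") resolves debug through the hook, and names that genuinely don't exist in logging still raise AttributeError.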
