Set global minimum logging level across all loggers in Python/Django

At the time of invocation of my application, I'd like to be able to read in (from config or command line or environment variable) the minimum logging level across ALL of my loggers and then globally set the logging module to respect this MIN_LOGGING_LEVEL.
For example, something like this (which doesn't exist): logging.setLevel(logging.INFO)
I understand that I can set logging levels individually on loggers and handlers, and create filters for handlers to respect things like the DEBUG flag. However, what I really want is to change the minimum log level across all loggers at invocation time (depending on the environment, for example).
Do I have to roll this myself and construct the configuration dynamically at runtime to accomplish this? Or is there a better way or pattern that I'm not seeing?

You can use logging.disable - the following ensures logging.INFO and below are disabled:
logging.disable(logging.INFO)
From the documentation:
Its effect is to disable all logging calls of severity lvl and below, so that if you call it with a value of INFO, then all INFO and DEBUG events would be discarded, whereas those of severity WARNING and above would be processed according to the logger’s effective level.
To undo it later, you can call:
logging.disable(logging.NOTSET)
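For illustration, a minimal self-contained sketch of the disable/re-enable cycle (the logger name and messages are made up):
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("demo")

logging.disable(logging.INFO)    # globally suppress INFO and DEBUG
log.info("hidden")               # discarded
log.warning("still shown")       # WARNING is above the disabled threshold

logging.disable(logging.NOTSET)  # undo the suppression
log.info("visible again")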

As an alternative to using the disable function, you can override the default level setting of WARNING as follows:
import logging
logging.getLogger().setLevel(logging.INFO) # choose your level here
I wouldn't call this a global solution per se, as it requires a module-level call, but it's useful IMO for simple debugging during development, before a more cohesive project hierarchy has been established.
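Tying this back to the original question: since Logger.setLevel also accepts level names as strings, the minimum level can be read in at invocation time. A minimal sketch, using the MIN_LOGGING_LEVEL environment variable name from the question:
import logging
import os

logging.basicConfig()  # attach a default stderr handler to the root logger
# setLevel accepts a level name string such as "DEBUG" or "INFO"
logging.getLogger().setLevel(os.environ.get("MIN_LOGGING_LEVEL", "WARNING"))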

I just ran into something similar, but I went in a different direction. I ended up with...
import logging
logging.root.setLevel(logging.DEBUG)
Python loggers work hierarchically, so setting the level on the root logger affects all downstream loggers that inherit from it.
In a few of the other examples here, logging.getLogger() with no argument also returns the root logger, so that code is essentially the same as this.
Given the hierarchical nature of loggers, the new information here is that you don't need to set the level globally on the root: you can be selective while still affecting multiple modules/loggers.
If you use log = logging.getLogger(__name__) to retrieve the logger in each module, and you configure logging inside a package's __init__.py file, you can set the level on that package's logger and affect all loggers inside the package.
- package_a
  - __init__.py
  - module_a.py
  - module_b.py
- package_b
  - __init__.py
  - module_c.py
  - module_d.py
Say you have packages and modules set up as shown above.
import logging
logging.getLogger('package_a').setLevel(logging.DEBUG)
Running this code configures module_a.py and module_b.py to DEBUG level, while module_c.py and module_d.py remain unaltered.
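A minimal sketch of the effect, simulating the loggers that logging.getLogger(__name__) would return inside each module:
import logging

logging.basicConfig(level=logging.WARNING)              # root stays at WARNING
logging.getLogger("package_a").setLevel(logging.DEBUG)  # only package_a gets verbose

logging.getLogger("package_a.module_a").debug("shown: inherits DEBUG from package_a")
logging.getLogger("package_b.module_c").debug("hidden: package_b still inherits WARNING from root")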

I think what you're looking for is this:
logging.getLogger().setLevel(logging.INFO)

Related

How to disable the logging of Uvicorn?

I am working on a FastAPI app served by Uvicorn. I want to disable the logging done by Uvicorn; I need only the logs emitted by the application itself.
I referred to this blog and implemented the logging.
You could change the log level so that only the messages you need get through; there are a bunch of possible options:
uvicorn main:app --log-level critical
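If you start the server from Python rather than from the shell, the same option can be passed to uvicorn.run; a sketch, assuming your app lives in main.py:
import uvicorn

# programmatic equivalent of: uvicorn main:app --log-level critical
uvicorn.run("main:app", log_level="critical")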
I think I had the same issue.
To disable a logger, one must first find the loggers that should be disabled. I did this by following this Stack Overflow post and this GitHub issue.
For me, it works to disable just two of uvicorn's loggers:
import logging
# ....CODE....
uvicorn_error = logging.getLogger("uvicorn.error")
uvicorn_error.disabled = True
uvicorn_access = logging.getLogger("uvicorn.access")
uvicorn_access.disabled = True
At first I tried the answer provided by @Sanchouz, but this didn't work out for me. Furthermore, setting propagate = False is regarded by some as bad practice (see this). As I wanted to do it programmatically, I couldn't test the answer provided by @funnydman.
Hope this helps anyone. Thinklex
Problem definition
I had a similar problem and found a solution. In my case, I built a small website with FastAPI to launch web scrapers in separate processes, and I wrote a wrapper class around loggers from the logging module. My problem was that when I started my app inside a Docker container with a uvicorn ... command that included settings for logging to a file, all logging from any web scraper went into both the scraper's separate log file and the server's log file. I have a lot of stuff to log, so this was quite a problem.
Short answer
When you get your logger, just set its propagate property to False, like this:
logger = logging.getLogger(logger_name)
logger.propagate = False
Long answer
At first I spent some time debugging the internals of the logging module, and I found a function called callHandlers, which loops through the handlers of the current logger and its parents. I wrongly assumed that the root logger was responsible for the problem, but after some more testing it turned out that the root logger didn't have any handlers at all. That means one of uvicorn's loggers was responsible, which makes sense, also considering Thinklex's solution. I tried his solution too, but it doesn't fit my case, because it disables uvicorn's logging completely, and I don't want that, so I'll stick with preventing propagation on my loggers.
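A condensed sketch of that fix (the logger name and file path are examples): the scraper logger gets its own handler, and propagate = False keeps its records from bubbling up to handlers attached higher in the hierarchy.
import logging

scraper_log = logging.getLogger("myapp.scraper")
scraper_log.setLevel(logging.INFO)
scraper_log.addHandler(logging.FileHandler("scraper.log"))
scraper_log.propagate = False  # records go to scraper.log only, not the server log

scraper_log.info("scraped 42 pages")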

Python switch a logger off by default

In Python, I want to optionally log from a module, but have this logging off by default, enabled with a function call. (The output from this module will be very spammy, so it's best off by default.)
I want my code to look something like this.
log = logging.getLogger("module")
log.switch_off()
---
import module
module.log.switch_on()
I can't seem to find an option to disable a logger.
Options considered:
Using filters: I think this is a bit confusing for the client
Setting a level higher than any I use to log (e.g. logging.CRITICAL): I don't like that we could inadvertently throw log lines into normal output if we ever logged at that level.
Use a flag and add ifs
Require the client to exclude our log events. See logging config
There are two pieces at play here. Python has logging.Logger objects and logging.Handler objects that work together to serve you logging information. Loggers handle the logic of collecting logging information, and deciding whether logs should be emitted to associated handlers. If the logging level of your log record is less severe than the level specified in the logger, it will not pass info to associated handlers.
Handlers have the same feature, and since handlers are the last line between log records and defined output, you would likely want to disable the interaction there. To accomplish this, and avoid having logs inadvertently logged elsewhere, you can add a new logging level to your application:
logging.addLevelName(logging.CRITICAL + 1, "DISABLELOGGING")
Note: This only maps the name to value for purposes of formatting, so you will need to add a member to the logging module as well:
logging.DISABLELOGGING = logging.CRITICAL + 1
Setting it to a value higher than CRITICAL ensures that no normal log event will pass and be emitted.
Then you just need to set your handler to the level you defined:
handler.setLevel(logging.DISABLELOGGING)
and now there should be no logs that pass the handler, and therefore no output shown.
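Putting the pieces together, a sketch of what the module could look like; switch_on/switch_off are the hypothetical names from the question, implemented here as module-level functions:
import logging

logging.DISABLELOGGING = logging.CRITICAL + 1
logging.addLevelName(logging.DISABLELOGGING, "DISABLELOGGING")

log = logging.getLogger("module")
_handler = logging.StreamHandler()
log.addHandler(_handler)
_handler.setLevel(logging.DISABLELOGGING)  # off by default

def switch_on():
    _handler.setLevel(logging.NOTSET)      # let every record through again

def switch_off():
    _handler.setLevel(logging.DISABLELOGGING)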

Enable debug logging for just one sub logger (via config, not code)

I use the Python logging module in a rather "default" setup: Configuration happens via config file in the ini format and using logging.config.fileConfig. Logger names are the module names, so I have loggers called like this:
myapp.submoduleA
myapp.submoduleB
myapp.submoduleC
externalLibA
externalLibB
In my default setup, I set the log level of myapp to INFO, and that of externalLibA and externalLibB to WARN. But now I want to enable DEBUG for myapp.submoduleB only. Is there a simple way to do this, without writing code and without explicitly configuring all other sub-loggers?
The only option I currently see is to not configure a level for myapp, configure DEBUG for myapp.submoduleB, and then manually configure INFO for all other sub-loggers. As I have more than three of them in real life, this would be annoying, and I wonder how other people handle this.
If you leave logger myapp at level INFO and just set logger myapp.submoduleB to level DEBUG, this should work as expected: myapp and myapp.submoduleX will be at INFO, except for myapp.submoduleB, which will be at DEBUG. Make sure you set disable_existing_loggers to False when calling fileConfig().
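In ini terms, that amounts to something like the sketch below (handler and formatter sections elided); note that the key names in [loggers] are arbitrary labels, while qualname carries the dotted logger name:
[loggers]
keys=root,myapp,submoduleB

[logger_root]
level=WARNING
handlers=consoleHandler

[logger_myapp]
level=INFO
handlers=consoleHandler
qualname=myapp
propagate=0

[logger_submoduleB]
level=DEBUG
handlers=consoleHandler
qualname=myapp.submoduleB
propagate=0
loaded with:
logging.config.fileConfig("logging.ini", disable_existing_loggers=False)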

Python - how do I configure a child logger via a logging configuration file

I have a Python package that contains a series of modules, each of which contains a library of routines with a central focus (i.e. one module may have text manipulation related routines). I would like to be able to configure logging for the entire package, but give the user the ability to customize logging for a given module.
I've tried to accomplish this using parent/child loggers so that each module doesn't have to have more than one logger. I can use the parent/child approach successfully if I configure the child loggers in the module itself, but I cannot find the correct syntax for configuring child loggers via the logger configuration file. When the Python logging utility reads in the configuration file, it chokes on any logger 'keys' containing dot notation (i.e. if I try to define loggers "root, parent, parent.child", ingesting the configuration file chokes on the "parent.child" key).
So I'm looking for a solution whereby a given module only has to have a single logger, and the user can configure logging for the entire package one way, but a different way for a specific module. The parent/child approach seemed the most promising.
If there were a way for a module to determine which loggers the logging manager has definitions for, I could use that as well. For example, let's say I could define two loggers in my logging configuration file named allUtil and textUtil. If the text utility module could determine whether the logging manager had a textUtil logger defined, it could use that one; otherwise it would use the allUtil logger. Unfortunately, I have not been able to figure out the correct Python code to do this either.
I am open to suggestions.
Here is what I came up with that appears to work.
Setup:
I have an application called 'myapp'
I have a package it uses called 'utils'
I have a module in package 'utils' called 'miscUtils'
In the log configuration file I have the following (only pertinent content shown)
[loggers]
keys=root,myapp,utils,miscUtils
.
.
.
[logger_root]
level=DEBUG
handlers=consoleHandler
[logger_myapp]
level=CRITICAL
handlers=consoleHandler
qualname=myapplogger
propagate=0
[logger_utils]
level=WARNING
handlers=consoleHandler
qualname=myapplogger.utils
propagate=1
[logger_miscUtils]
level=DEBUG
handlers=consoleHandler
qualname=myapplogger.utils.miscUtils
propagate=1
.
.
.
In the module utils/miscUtils.py I use the logger 'myapplogger.utils.miscUtils'. All other modules in the 'utils' package use loggers named 'myapplogger.utils.xxx', with xxx representing the particular module. In the main program module myapp.py I use the logger 'myapplogger', but I explicitly set its level to INFO.
When I run the program, the main module emits log messages at INFO level and above. All 'utils' modules except miscUtils.py log messages at WARNING and above. However, miscUtils.py, which uses the logger 'myapplogger.utils.miscUtils', emits log messages at DEBUG and above.
Via the configuration file I can now define overall program log levels, package log levels, and module log levels, as needed.
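For completeness, a sketch of how this is wired up at runtime (the config file name is an example):
import logging
import logging.config

logging.config.fileConfig("logging.conf", disable_existing_loggers=False)

# inside utils/miscUtils.py:
log = logging.getLogger("myapplogger.utils.miscUtils")
log.debug("visible: this logger is configured at DEBUG")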

Advantages of logging vs. print() + logging best practices

I'm currently working on the 1.0.0 release of the pyftpdlib module. This new release will introduce some backward-incompatible changes, in that certain APIs will no longer accept bytes but unicode. While I'm at it, as part of this breakage, I was contemplating the possibility of getting rid of my logging functions, which currently use the print statement, and using the logging module instead.
As of right now pyftpdlib delegates the logging to 3 functions:
def log(s):
    """Log messages intended for the end user."""
    print s

def logline(s):
    """Log commands and responses passing through the command channel."""
    print s

def logerror(s):
    """Log traceback outputs occurring in case of errors."""
    print >> sys.stderr, s
The user willing to customize logs (e.g. write them to a file) is
supposed to just overwrite these 3 functions as in:
>>> from pyftpdlib import ftpserver
>>>
>>> def log2file(s):
... open('ftpd.log', 'a').write(s)
...
>>> ftpserver.log = ftpserver.logline = ftpserver.logerror = log2file
Now I'm wondering: what benefits would getting rid of this approach and using the logging module instead bring?
From a module vendor's perspective, how exactly am I supposed to expose logging functionality in my module?
Am I supposed to do this:
import logging
logger = logging.getLogger("pyftpdlib")
...and state in my doc that "logger" is the object which is supposed
to be used in case the user wants to customize how logs behave?
Is it legitimate to deliberately set a pre-defined format output as in:
FORMAT = '[%(asctime)s] %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger('pyftpdlib')
...?
Can you think of a third-party module I can take cues from where the logging functionality is exposed and consolidated as part of the public API?
Thanks in advance.
Libraries (such as an FTP server or client library) should never initialize the logging system. So it's OK to instantiate a logger object and point at logging.basicConfig in the documentation (or provide a function along the lines of basicConfig with fancier output and let the user choose among logging configuration strategies: plain basicConfig or the library-provided configuration).
Frameworks (e.g. Django) or servers (an FTP server daemon), on the other hand, should initialize the logging system to a reasonable default and allow for customization of the logging configuration.
Typically libraries should just create a NullHandler handler, which is simply a do nothing handler. The end user or application developer who uses your library can then configure the logging system. See the section Configuring Logging for a Library in the logging documentation for more information. In particular, see the note which begins
It is strongly advised that you do not add any handlers other than NullHandler to your library's loggers.
In your case I would simply create a NullHandler, as per the logging documentation:
import logging
logging.getLogger('pyftpdlib').addHandler(logging.NullHandler())
Edit: The logging implementation sketched out in the question seems perfectly reasonable. In your documentation, just mention logger and discuss or point users to the Logger.setLevel and Handler.setFormatter methods for customising the output from your library. Rather than using logging.basicConfig(format=FORMAT), you could consider using logging.config.fileConfig to manage the settings for your output, document the configuration file somewhere in your documentation, and again point the user to the logging module documentation for the format expected in this file.
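From the application side, opting in to the library's logs then stays a two-liner; a sketch with an example format string:
import logging

logging.basicConfig(format="[%(asctime)s] %(levelname)s %(message)s")
logging.getLogger("pyftpdlib").setLevel(logging.DEBUG)  # opt in to verbose library output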
Here is a resource I used to make a customizable logger. I didn't change much; I just added an if statement and pass in whether or not I want to log to a file or just to the console.
Check this Colorer out. It's really nice for colorizing the output, so DEBUG looks different from WARN, which looks different from INFO.
The logging module bundles a heck of a lot of nice functionality, like SMTP logging and file-rotation logging (so you can keep a couple of old log files without creating hundreds of them every time something goes wrong).
If you ever want to migrate to Python 3, using the logging module will remove the need to change your print statements.
Logging is awesome, depending on what you're doing. I've only lightly used it before, to see where I am in a program (if this function is running, color the output this way), but it has significantly more power than a regular print statement.
You can look at Django (just create a sample project) and see how it initializes its logging subsystem.
There is also a contextual logger helper that I wrote some time ago: this logger automatically takes the name of the module/class/function it was initialized from. This is very useful for debug messages, where you can see at a glance which module emits a message and how the call flow goes.
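The helper itself isn't linked here, but the idea can be sketched with the standard library alone (get_logger is a made-up name): inspect the caller's frame to derive the logger name automatically.
import logging
import sys

def get_logger():
    # name the logger after the calling module and function, e.g. "mymodule.myfunc"
    frame = sys._getframe(1)
    name = "%s.%s" % (frame.f_globals.get("__name__", "root"), frame.f_code.co_name)
    return logging.getLogger(name)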
